I’ve been working with so many students who turn to it as a first resort for everything. The second a problem stumps them, it’s AI. Their first source for research is AI.
It’s not even about the tech; there’s just something about not wanting to learn that deeply upsets me. It’s not something I can really understand. There is no reason to avoid getting better at writing.


My main point there is that when evaluating the impact of a tool, I look at how it is used rather than how it could be used. Arguments like ‘if people were to use it like this or that…’ are not so interesting to me. What I care about is the actual impact of a thing, and for that, the only thing that matters is how people actually use it.
Now, a separate thing is my assessment of how people actually use generative AI, and whether I consider the things they do with it a boon for society. I see:
I don’t like these actual things that people are actually using gen AI for. Maybe you see LLMs having different effects and arrive at a different, more positive assessment. But you cannot separate the assessment of a tool from its users and how they use it, because they’re the ones who will actually be using it, and they’ll use it the way they use it.