I’ve been working with so many students who turn to it as a first resort for everything. The second a problem stumps them, it’s AI. The first source for research is AI.

It’s not even about the tech; there’s just something about not wanting to learn that deeply upsets me. It’s not really something I can understand. There is no reason to avoid getting better at writing.

  • SuspciousCarrot78@lemmy.world
    24 days ago

    OK but is that an AI problem or a people problem?

    I think the Postman point is a fair one. The way information is presented absolutely affects how people reason with it. A fluent conversational answer can feel authoritative in a way that a messy set of search results doesn’t.

    But that problem isn’t unique to LLMs. Every medium that compresses information into something smooth and persuasive has created the same concern.

    Books did it, newspapers did it, television did it, and search engines arguably did it as well.

    The real question is whether the medium determines behaviour or just amplifies existing habits.

    People who already interrogate sources tend to interrogate AI outputs as well. People who don’t… won’t.

    I suspect there’s a bigger issue here than “LLM bad”. We’ve been drifting toward shallow, instant-answer information consumption for years. AI just slots neatly into a pattern that already existed.

    We’ve become (for lack of better words) mentally flabby - me included.

    • BranBucket@lemmy.world
      24 days ago

      If I’m arguing in good faith, it’s both. We have a tool that uses us: a medium that shoves massive amounts of information at us but hinders gaining knowledge (which I’ll define as the useful retention and application of that information, not just winning trivia night), and as a species we refuse to stop letting ourselves be suckered by it.

      In the same vein, Postman also argued that this sort of change is both ongoing and inevitable, and that the only real debate is over what the true cost to our culture and society will be. He cited examples going back to Plato, if I remember correctly. So, as you put it, writing did it, books did it, television did it, search engines did it, and so on. And so much money has been spent on making this a thing that we’re going to have to contend with it until it undeniably starts costing more than it’s worth; if that cost is cultural or societal rather than financial, it might never go away.

      > I suspect there’s a bigger issue here than “LLM bad”. We’ve been drifting toward shallow, instant-answer information consumption for years. AI just slots neatly into a pattern that already existed.

      I don’t pretend to speak for the man, but I think Postman would agree with you, and he thought it started in the 1860s with the telegraph.