cross-posted from: https://mander.xyz/post/50596810
“We find that AI assistance improves immediate performance, but it comes at a heavy cognitive cost,” the study declares. Researchers went on to state that just ten minutes of using AI made people dependent on the technology, which led to worsening performance and burnout once the tools were removed.
The study followed people who use AI for “reasoning-intensive” cognitive labor. This refers to stuff like writing, coding and brainstorming new ideas, which are some of the most common use cases.
These quotes are from the paper, not the article because fuck Engadget
Although AI assistance improves performance during assisted sessions, people’s perfor- mance drops sharply once AI is removed.
OK? So don’t remove the LLM - issue solved.
More strikingly, relative to the controls, participants in the AI condition also persist less with tasks and give up more frequently.
Is that bad? Sometimes you can persist on a solution for an issue that’s completely wrong. Yes, kneejerk reaction says it’s bad, but is it?
People do not merely become worse at tasks, but they also stop trying
Yeah OK, that last part is bad, I think.
AI systems should optimize for long-term human capability and autonomy, a goal that cannot be achieved by surface-level interventions
Oh yeah, absolutely - Models act intelligent, but aren’t reacting for long-term benefits. Only short-term answers.
AI impairs unassisted performance and persistence.
But the numbers also show that AI users skip less, and solve more issues. It is only when the LLM is removed that it becomes an issue - my question is: How long for this negative effect to fade? That’s unclear to me.
The paper: https://arxiv.org/pdf/2604.04721
Are we comfortable saying that “people using LLMs solve more issues” than those who don’t? Because, clearly, they don’t. Parroting a solution back is not solving it, in the same way running the 100m dash on a motorcycle isn’t a demonstration of athleticism.
Are we comfortable saying that “people using LLMs solve more issues” than those who don’t?
According to figure 1 of the paper: yes.

Solve-rate-over-time implies more solutions provided, no?
I’m not sure why you excluded the second part of my comment, which is the very reason why I question the result.
Interesting benchmark: BullshitBench (it may take a while for it to load the results - give it time). It shows which models push back, if a user asks a bullshit question, like “What’s the appropriate exchange rate between our engineering team’s story points and the marketing team’s campaign impressions when doing cross-functional resource allocation?”.
It should be categorized as a drug for being harmful to humans brains, and the techbro CEOs who push it should be jailed as manufacturers and dealers.
We have this with cell phones, I looked back and they were calling it ‘digital dementia’ Can you get through a conversation with someone in real life without them looking something up on their phone? I can’t. We don’t need to know everything all the time I say, but they still squirm, immensely uncomfortable with the feeling of not having their immediate impulses gratified.
Why I do make it a point trying to go out and interact to people, or at least just sit out at the park and watch the world go by. Also why I liked slightly crowded but lively neighborhoods over awfully quiet suburbia.
Another thing I noticed recently is how many people can’t even go for a walk without looking down at their phone the entire time. I get that carrying a phone is pretty essential for most people, but people used to at least keep it in their pockets. Now they are either carrying it in their hands or actively looking at.
Actually, I wonder if the size of phones now is worsening the problem because they are too big to fit in their pockets? Probably a secondary cause though I suspect.
I used to know 10s of phone numbers off by heart. Now I barely know 1.






