I don't think the right did that to the left. We did it to ourselves. In contrast, the right is somehow really good at putting aside differences to work toward a common goal. I want to know how we can copy that.
Personally, as someone who prefers it when software only does things I direct it to, I'd rather that an LLM not automatically search for the answer online if I didn't ask it to.
The AI should never respond with a confident answer to a prompt it has no idea about.
Agreed. But the technology isn't there yet. It's not shit programming, because the theory of how to solve this problem doesn't even exist yet. I mean, there are some attempts, but nobody has a good solution yet. It's like complaining that cars can't go 500 miles per hour when the technology tops out around 200 mph, and blaming that on bad car design when the real problem is the user's expectation. The user has been misled by the way AI companies present these things, so ultimately it's the AI companies' fault for overmarketing their products.
(Fuck cars btw).
They gave it a very reasonable prompt that a grade 1 child could answer, and it failed.
LLMs don't work like grade 1 children. The real problem is that AIs are marketed in a way that leads people to expect them to be at least as good as a grade 1 child at anything a grade 1 child can do. But AIs are not humans. They can do some things better than any human, yet on other tasks they can be outperformed by a kindergartner. That's just how the technology is.
Blame expectations, blame marketing, fuck AI in general, but you've been totally misled if you're expecting it to be able to, say, count the number of letters in a word or break a kanji into components when all it sees are tokens, not letters, not characters.
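To make the token point concrete, here's a toy sketch (not any real tokenizer, and the vocabulary is made up) of greedy longest-match tokenization, roughly in the spirit of BPE. The point is just that the model receives opaque token IDs, not characters:

```python
# Toy illustration: a made-up vocabulary and a greedy longest-prefix
# tokenizer, to show why a model that sees token IDs rather than
# characters struggles with letter-level questions.
VOCAB = {"str": 101, "aw": 102, "berry": 103, "s": 104, "t": 105}

def tokenize(word, vocab=VOCAB):
    """Greedy longest-prefix match over the vocabulary."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try longest piece first
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"no token covers {word[i:]!r}")
    return tokens

print(tokenize("strawberry"))  # ['str', 'aw', 'berry']
# The model sees something like [101, 102, 103] -- three opaque IDs.
# Nothing in that sequence says how many r's the word contains.
```

Real tokenizers are far more elaborate, but the core issue is the same: "how many r's in strawberry" asks about a level of representation the model never directly sees.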
Yeah. The average person just doesn't have a good intuition about AI, at least not yet. Maybe in a few years people will be burned by it and they'll start to grok its limits, but idk. I still blame the AI companies here.
The knowledge cut-off for GPT-5 is 2024, just so you know. Obviously, it would be better if it didn't hallucinate a response to fill in its own blanks. But it's software, so if you're going to use it then please use it like software and not like it's magic.
In general I'm not too moved either way when somebody misuses AI and then posts gobsmacked about how bad it is. Really though, the blame is on AI companies for trying to push AI onto everyone rather than only to domain experts.