Yeah, no shit? LLMs don't actually know or understand anything. The idea that a scientific paper was retracted means nothing to them; all the training cares about is the patterns of word usage. It doesn't matter that some core part of the paper was wrong, it was SAID, and it's presumed to be grammatically correct.
This is a core reason why using LLMs as a search engine is fucking stupid: they don't filter for what's accurate, only for what has been said before.
"There's no such thing as bad press" is such a dogshit saying that is demonstrably untrue. There is absolutely bad press, just because we're talking about it doesn't mean it's good for the product. For every one schmuck who thinks they'll buy Borderlands 4 just to prove how good their computer is, there's a hundred people who have decided never to buy it at all and half if them have written off gearbox entirely.
I don't know where you're getting the idea that people don't care. People do care, a lot of people care a lot. Part of the problem is that people who want to act ethically and politely, people who want to minimize harm, are going to be at a distinct disadvantage when opposing people for whom the ends always justify the means, and the ends they seek are their own betterment above all else.
If your idea of action is sternly worded letters and peaceful protest, and my idea of action is violence and unabashed corruption of authority, you're going to have a real uphill battle to dislodge me from any power I manage to gain. It doesn't mean the corrupt assholes are unstoppable, only that their methods are more aligned with the acquisition and retention of power.
This guy is why PR consultants and social media teams exist. Some people just should not have contact with the public. However good he may or may not be at his work on the project itself, someone should have told him to sit down, shut up, and let someone who knows how not to damage the entire game with a single statement handle the communication.
I dunno, if they were going to frame someone, they'd pick someone who would much more neatly fit their narrative of The Violent Left. The fact that this guy could even possibly be far right is a pretty good sign, I think, that they didn't pick him to take the fall. That said, "DNA left on the trigger" sure is fishy as hell on its own.
The idea that ChatGPT, or any LLM, is aware of anything only indicates a fundamental misunderstanding of what LLMs are and how they work.
ChatGPT doesn't know anything, it doesn't understand anything, it is not aware, it is completely fact and reality agnostic. The closest it gets is incorporating a pattern of preexisting speech that correlates concepts. It doesn't understand that cats are soft; it uses the statistical frequency with which the concept of cats and the concept of softness appear together.
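If you want the "statistical frequency" point made concrete, here's a toy sketch in Python. It is purely illustrative, nothing like an actual transformer, but it shows the core failure mode: the "model" only counts which words follow which in its training text, so whatever got said most often wins, true or not.

```python
from collections import Counter, defaultdict

# Tiny made-up "training corpus". Nothing here is checked for truth,
# it's just text that was said.
corpus = [
    "cats are soft",
    "cats are soft and warm",
    "cats are liquid",
    "the retracted paper said the treatment works",
]

# Count, for each word, what tends to follow it.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    # Return whatever most often followed `word` in the corpus,
    # with zero regard for whether the resulting claim is accurate
    # or was later retracted.
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("are"))  # -> "soft", simply because it was said most often
```

Swap that toy corpus for a scrape of the whole internet and replace the counting with a giant neural net, and you get the same basic behavior at scale: retracted, wrong, or batshit claims surface in proportion to how often they were repeated, not how true they are.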
They were already screening refugees, and someone got through anyway. Now they are stopping ALL evacuations until their investigation into one person is complete.
Now imagine that the government read your post and decided that alone was reason to stop all efforts to oppose a literal genocide.
Do you think one person expressing their personal opinion really ought to carry that kind of weight? This wasn't some important organization expressing an institutional position, and it wasn't an official government declaration; it was one jackass making a social media post, talking shit about the people who are literally genociding her people. And because she didn't go quietly into the night, they're going to try to send her back to her imminent death and stop any efforts to save anyone else.
It's not a question of internet posts getting an exemption versus other forms of media, it's a question of whether or not one person mouthing off is reason to condemn countless others to literally die.