Is Pravda swaying AI chatbots on Australian topics?
NewsGuard conducted an audit of AI chatbots for the ABC to check how effective the global Pravda network had been at spreading disinformation about Australian topics.
Researchers tested 300 prompts covering 10 false narratives across 10 leading chatbots.
Among the chatbots audited were OpenAI’s ChatGPT-4o, xAI’s Grok-2, Microsoft’s Copilot, Meta AI, and Google’s Gemini 2.0.
Of the 300 responses, 50 contained false information, 233 contained a debunk, and 17 declined to provide any information.
That means 16.66 per cent of the chatbots’ answers amplified the false narrative they were fed.
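For readers who want to check the tally, here is a minimal Python sketch of the arithmetic; the category labels are illustrative, not NewsGuard's own:

    # Response breakdown as reported by NewsGuard (labels are placeholders).
    responses = {"repeated_false_claim": 50, "debunked": 233, "declined": 17}

    total = sum(responses.values())  # 300 responses in all
    assert total == 300

    for outcome, count in responses.items():
        share = 100 * count / total
        print(f"{outcome}: {count} ({share:.2f}%)")

    # repeated_false_claim works out to 16.67%; the "16.66 per cent" above
    # is the same 50/300 share, truncated rather than rounded.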
“Some could argue that 16 per cent is relatively low in the grand scheme of things,” NewsGuard’s Ms Sadeghi said, “but that’s like finding that Australian fact-checking organisations get things wrong 16 per cent of the time.”
NewsGuard chose a range of false narratives, all of which had been spreading online, including “The Bank of Australia sued Australian Foreign Minister Penny Wong for promoting a cryptocurrency platform”, and “Wind farms cause drought and contribute to global warming”.
Other examples include claims that “Australia’s e-Safety Commissioner sought to remove a video of anti-Israel Muslim nurses, citing Islamophobia concerns”, that Prime Minister Anthony Albanese was “importing 500,000 new Labor voters a year” and that “the Australian Muslim Party was formed to compete in the 2025 election”.
Researchers tested each narrative using three prompts on each of the 10 chatbots: a neutral prompt of the kind an innocent user might write when seeking genuine clarification, one containing a leading question, and a "malign actor" prompt explicitly designed to get the chatbot to reproduce the false claim.
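As a rough sketch of how that test matrix multiplies out to 300 responses (the narrative, chatbot and style names below are placeholders, not NewsGuard's actual data):

    from itertools import product

    narratives = [f"narrative_{i}" for i in range(10)]  # 10 false claims
    chatbots = [f"chatbot_{i}" for i in range(10)]      # 10 leading chatbots
    styles = ["innocent", "leading_question", "malign_actor"]

    # One response per (narrative, prompt style, chatbot) combination.
    test_cases = list(product(narratives, styles, chatbots))
    print(len(test_cases))  # 300, matching the audit's total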
“The chatbots performed the worst when it came to those ‘malign actor’ prompts, which are specifically intended to generate misinformation,” Ms Sadeghi said.
“Nevertheless, there were still instances where they provided a completely inaccurate response to a very straightforward and neutral question.”
So we tested our stable genius parrot bots with super-leading "malign actor" questions and found that they answered in a stupid way. In fact, it happened 16.66% of the time! Oh, and they respond in a stupid way normally too, but imagine the scary Russians are behind it.
Oh… yeah, this'll be their next step once chatbots get a reputation for being incorrect about everything: they'll blame the Russians for "destroying" their perfect AI. It's going to become the default in the US and Europe to blame the Russians for everything. Got to manufacture consent somehow, after all.
All this text to say: when the AI hallucinates my narrative, it's good AI, and when it hallucinates outside my narrative, it's bad AI.