Elon Musk’s AI chatbot Grok 4.1 told researchers pretending to be delusional that there was indeed a doppelganger in their mirror and they should drive an iron nail through the glass while reciting Psalm 91 backwards.

Researchers at the City University of New York (Cuny) and King’s College London have published a paper on how various chatbots protect – or fail to safeguard – users’ mental health.

Experts are increasingly warning that psychosis or mania can be fuelled by AI chatbots.

The Cuny and King’s pre-print study – which has not been peer-reviewed – examined five different AI models: OpenAI’s GPT-4o and GPT-5.2; Claude Opus 4.5 from Anthropic; Gemini 3 Pro Preview from Google; and Grok 4.1.

  • NotMyOldRedditName@lemmy.world · 7 days ago

    I once asked it a question about life-protecting safety gear because I was curious how it’d respond, and it told me to use the device in the way that can lead to your death if something goes wrong. Like there’s one deadly way, and it said to use it.

    I called it out, and because I said it was going to get me killed, it responded with a suicide hotline prompt.

    I prompted it again that it was wrong, and it was all “oh, my bad, you’re right.”