- cross-posted to:
- futurology@futurology.today
Yep, we are screwed. They have successfully created something with more potential for harm than the iPad.
I also find it crazy that it is addictive. I’m not against using an AI for mental help, but you should use a local AI instead.
Just imagine a world where all the people talk like LLMs
Indeed, the potential consequences are concerning. Addiction to AI-driven technologies is a real issue, especially when they become ubiquitous. Utilizing local AI for mental health support could indeed mitigate some risks. However, we must proceed cautiously to navigate the complexities of integrating AI into our daily lives. And yes, envisioning a world where everyone converses like language models is both fascinating and slightly eerie.
Just imagine a world where all the people talk like LLMs
Have you talked to real people lately? I think I’m leaning towards the dystopian version of humanity.
I kind of wonder what the character.ai privacy policy is for all these conversations.
I would imagine that a number of people might not want a full log of their conversations with their psychologist and/or friend to leak, or to have information extracted from it for arbitrary purposes.
i dunno, too many teenagers have pretty much no concerns about (online) privacy at all
I, too, salute our soon to be digital overlords.
Kate Bush - Deeper Understanding, 1989.
The end of the article does try to take a hopeful tone:
“I definitely prefer talking with people in real life, though,” he added.
I don’t necessarily agree with everything though:
While some of the culture around Character.AI is concerning, it also mimics the internet activity of previous generations who, for the most part, have turned out just fine.