I dunno, my feeling is that even if the hype dies down, we’re not going back. A real transition has happened, just like when Facebook took off.
Humans will still be in the loop through their prompts and various other bits and pieces and platforms (Reddit is still huge) … while we may just adjust to the new standard, in the same way that many people reported losing the ability to do deep reading after becoming regular internet users.
I think it’ll end up like Facebook (the social media platform, not the company). Eventually you’ll hit model collapse for new models trained on uncurated internet data once a critical portion of all online posts is made by AI, and it’ll become much more expensive to create quality, up-to-date datasets for new models. Older/less tech-literate people will stay on the big, AI-dominated platforms getting their brains melted by increasingly compelling, individually tailored AI propaganda, and everyone else will move to newer, less enshittified platforms until the cycle repeats.
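To make that feedback loop concrete, here’s a toy sketch in Python. It’s purely my own illustration (not any real training pipeline), assuming each model generation can only reproduce what was in its training data, and that data increasingly comes from the previous generation’s output. Diversity shrinks every generation:

```python
# Toy illustration of the training-on-your-own-output loop: each new
# "generation" trains only on the previous generation's output, modeled
# here as sampling with replacement from the previous corpus.
import random

random.seed(0)
corpus = list(range(1000))  # 1,000 distinct human-written "posts"

for generation in range(1, 11):
    # Next generation's training data is drawn from the previous
    # generation's output only, so its support can never grow.
    corpus = [random.choice(corpus) for _ in range(1000)]
    print(f"gen {generation}: {len(set(corpus))} distinct posts survive")
```

Real model collapse is subtler than this, but the direction is the same: the tails of the distribution (the rare, niche content) go first.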
Maybe we’ll see an increase in Discord/Matrix-style chatroom-type social media, since it’s easier to curate those and be relatively confident everyone in a particular server is human. I also think most current fediverse platforms are marginally more resistant to AI bots, because individual servers can have an application process that verifies your humanity, and then defederate from instances that don’t do that.
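As a sketch of that application/defederation idea (everything here is hypothetical: the Instance type, its fields, and the domains are invented for illustration, not Lemmy’s or Mastodon’s actual admin API):

```python
# Hypothetical federation policy sketch. The Instance type, its fields,
# and the domains are made up for illustration; this is not any real
# fediverse admin API.
from dataclasses import dataclass

@dataclass
class Instance:
    domain: str
    requires_application: bool  # signups gated by a human-reviewed application?

known_peers = [
    Instance("vetted.example", True),
    Instance("open-signups.example", False),
]

def should_federate(peer: Instance) -> bool:
    # Defederate from any instance that doesn't verify humanity at signup.
    return peer.requires_application

print("federating with:", [p.domain for p in known_peers if should_federate(p)])
```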
Basically, anything that can segment the Unceasing Firehose of traffic on the big social media platforms into smaller chunks that can be more effectively moderated, ideally by volunteers, because a large tech company would probably just automate moderation and then you’re back at square one.
Honestly, that sounds like the most realistic outcome. If the history of the internet is anything to go by, the bubble will reach critical mass and not so much pop as slowly deflate, as something else begins to grow and takes its place as the object of hype.

Great take.
Older/less tech-literate people will stay on the big, AI-dominated platforms getting their brains melted by increasingly compelling, individually tailored AI propaganda
Ooof … great way of putting it … “brain-melting AI propaganda” … I can almost see a sci-fi short film premised on this image … with the main scene being when a normal-ish person tries to have a conversation with a brain-melted person and we slowly see, from their behaviour and language, just how melted they’ve become.
Maybe we’ll see an increase in Discord/Matrix-style chatroom-type social media, since it’s easier to curate those and be relatively confident everyone in a particular server is human.
Yep. This is a pretty vital project in the social media space right now that, IMO, isn’t getting enough attention, in part, I suspect, because a lot of the current movements in alternative social media are driven by millennials and Gen Xers nostalgic for the internet of 2014, without wanting to make something new. And so the idea of an AI-protected space doesn’t really register in their minds. The problems they’re solving are platform dominance, moderation, and lock-in.
Worthwhile, but in all seriousness, about 10 years too late and after the damage has been done (surely our society would be different if social media hadn’t gone down the path it did from 2010 onward). Now what’s likely at stake is the enshittification, or en-slop-ification (slop = unwanted AI-generated garbage), of internet content and the obscuring of quality human-made content, especially content from niche interests. Algorithms started this, and alt-social is combating that, which is great.
But good community-building platforms with strong privacy or “enclosing” and AI/bot-protection mechanisms are needed now. Unfortunately, all of these clones of big-social platforms (Lemmy included) are not optimised for community building and fostering. In fact, I’m not sure I see community hosting as a strength of any social media platform at the moment apart from Discord, which says a lot, I think. Lemmy’s private and local-only communities (on the roadmap, apparently) are a start, but still only a modification of the Reddit model.
I see you have met my Fox News watching parents.

LOL. I haven’t actually met someone like that, in part because I’m not a USian and generally not exposed to that type ATM … but I am morbidly curious, TBH.
You’re absolutely right about not going back. Web 3.0, I guess. I want to be optimistic that people will be able to tell all the garbage apart from actually useful or real information, but like you said, general tech and media literacy isn’t encouraging, hey?
Slightly related, but I’ve actually noticed a government awareness campaign where I live about identifying digital scams. It’d be nice if that could be extended to cover incorrect or misleading AI content too.