No. You don't get to decide what is put on my personal computing device just because you want to force the general public to bear the burden of protecting children rather than forcing parents to do their fucking jobs.
Yeah. I often forget this one because AI isn't replacing my job any time soon. At best it could potentially be used to streamline some processes around tech data and workflow management (what tests and protocols get done when, and combining tests/troubleshooting steps to prevent rework). But that would have to be a very targeted and very heavily regulated and tested thing before it could be viable.
I think this is a case of the lesser of two evils here. Not being Elon Musk is such a low bar to clear.
Their statements each time something bad happens with their products don't suggest that things will change in a meaningful way any time soon. There are a lot of reasons I'd never ride in one of these, but even putting that aside, each one objectively seems to have significant implementation problems that are getting lip service instead of actual fixes.
The crazy thing is, none of these articles seems to want to admit that AI is bad. They keep writing articles like this. They keep saying that approval is falling among the general populace. But when they touch on why that is, there are always some weasel words. Always some spin.
It's never "people being forced to use it see it as a detriment to them," or "people using it see a decrease in the efficacy of the results it gives for the amount of prompting required," or "people don't like it because it's going to have significant detrimental effects on the environment and their utilities."
All of those are solid reasons for the decline in both the use of AI LLMs and the approval of them.
The cost of goods and services relating even tangentially to AI are going through the roof. The amount of slop is increasing at a furious pace, directly contributing to things like enshittification and dead Internet theory. The effect on the economy is looking to be extremely catastrophic.
But oh no. It's lack of authenticity on social media spaces that people are worried about. Sure.
They didn't pass a law. The president signed an EO which only applies to the people who work for him.
"Executive orders are simply signed by the president, primarily to direct the officials in the executive branch, but they do not have the same force of law."
You are assuming A) that Google isn't scraping data for their own AI, B) that these companies will create their own instances (which opens them up to a certain amount of liability and requires them to retain moderation/admin and maintenance staff, which costs money), and C) that the enshittification of corporate-owned versions of Lemmy and the fediverse won't push people to Lemmy sooner or later.
A fourth assumption you made is that the Threads federation push was made to do anything other than create hype around a feature that might draw people away from places like the fediverse. I kind of assumed (maybe I'm wrong) that they were offering it as a way to have all the benefits of federation, namely the association with FOSS-adjacent services, but with all the "benefits" of corporate social media.
The truth is that Meta has likely had a detrimental effect on the fediverse, because it has things that pull users away from it. Instagram has content. For days. And because the fediverse is small (shrinking, as you say), and because it doesn't have an algorithm that pushes certain content to certain users, Meta and the other services that have analogs in the fediverse continue to be popular.
A lot of this is because the fediverse still hasn't figured out a way to be profitable for content creators, and we no longer live in the early 2000s of YouTube and the like, when creating content for free was popular.
I'd argue that a lot of the appeal of the fediverse is organic conversation and communication. The popularity of that as a whole is declining because of algorithms that tickle just the right feel-good chemicals in our brains.
As for your comment about these corps investing in the fediverse? The only reason for them to do that is if they can make money off it, and the major moneymaking scheme the internet relies on is ad revenue. So there's a catch-22 here. I would rather donate money to fedi services than have the fediverse infested with ads.
It is possible to be right for wrong reasons. Nothing prevents a general ban on gambling ads from moving forward since underage users might still see them.
I can agree with what you're saying but also say that this is more a case of the road to hell being paved with good intentions.
They wanted to offload their responsibility as parents for enforcing parental controls for their children onto the internet at large, which puts the identities and PII of adults at risk in a way that is increasingly dangerous. It also directly contributed to the erosion of our privacy.
They also claimed to be a grassroots movement and denied being affiliated with any corporation (especially one involved in gambling). That is an important distinction, and they should have their feet held to the fire for it, because either they knew and didn't care, or they didn't know and were manipulated.
I don't want that. Part of the fediverse's appeal for me is that people aren't constantly trying to sell me things on it.
While I can understand certain communities having "suggest a (game, service, product)" threads, for the most part I really don't want to basically invite corps to think this is free real estate. And that's exactly what I think this would do.
It seems like it would invite corps to basically astroturf Lemmy and the fediverse the way they're doing with bot armies over on Reddit.
It's always a good idea to follow the money. A few random bandwagon jumpers screaming about saving the children provided a front for a gambling company. Should we be asking them questions about their involvement in said company? I think we should.
No. I think you misunderstood what the point of my comment was. In one breath you told someone they don't know things and I directly quoted that comment.
Now you're backtracking and saying of course you know these things about yourself. You do not get to have it both ways. Either humans know things and can make actionable decisions and perform actual work based on what they know and the context they're working in, or they don't, and this is a simulation where gen AI LLMs do the same things humans do.
Well. This is good news.