No, I get what you’re saying, but your understanding of the world as it exists is incorrect, and your values favor oppression over freedom.
Your incorrect understanding of reality: the on-platform tools that currently exist on Facebook are useless. Through account settings you are powerless to limit your exposure to content from strangers in your feed, much less your child’s, except by blocking accounts individually as you encounter them, while logged into the account you want to block them from. Even Bluesky, which also has insufficient tools, is slightly better in this regard. The few on-platform tools you’re offered exist only to give you the illusion of control over your experience. Greater control is possible but not offered, because it’s less profitable. It could be mandated through law.
Your anti-freedom values: making platforms responsible for user content will destroy them or force severe proactive censorship and real identity policies. None of that is conducive to a free and open society. The fediverse could not exist if servers could be held responsible for what users say or do. Most of the Internet couldn’t exist if one rogue or politically unpopular user could land the service they use in court by offending another.
Your last paragraph is complete nonsense. The way to win an arms race is to come in with bigger arms. That’s where the government comes in: not to force its own will, but to restrain companies and empower people. The notion that giving people greater control over their experiences can harm them is insane.
On one hand you want government mandated tools…on the other you want unlimited freedom.
I just want the content that’s posted on social media platforms to be legal. I don’t know why you’re babbling about restriction of freedom. You want illegal content to be…legal?
I’m not sure what data-speech you personally think should or shouldn’t be legal, but I know what kinds a lot of people argue should be illegal: things ranging all the way from videographic records of child abuse (CSAM) to unauthorized copyrighted material to libel to hate speech to blasphemy and plenty else not mentioned. I think some of it is deservedly illegal (e.g. CSAM) and some of it shouldn’t be (e.g. blasphemy).
My position is that in a pluralistic society there will be a variety of speech that people won’t want to see for various reasons, and they have a right not to see it. They have a right to have tools that allow them to not see things they don’t want to see. And government censorship of speech should be limited to the absolute bare minimum of speech that causes material harm, and legal responsibility for those rare instances of illegal speech should fall upon the speaker and not the platform or carrier.
The only thing I’m talking about is social media companies moderating their platforms so there’s zero tolerance for illegal communication. That is, whatever laws are currently legislated in a given region.
Currently, in North America, social media companies moderate themselves…typically with user reporting and automation. There’s an hours-long gap between an infraction and any action.
This could be eliminated with proper moderation. I believe this is the bare minimum. The status quo is the Wild West…children and adults alike are bombarded with illegal content every time they use social media, or the internet at large.
While I’m bombarded by obnoxious content on social media, I very very rarely see content that is illegal in my area. Let’s stick a pin in that.
According to a few sources I’ve seen, 500 hours of video are uploaded to YouTube every minute. Suppose Alphabet were held liable for any of that content being illegal. Strict liability. Would they rely on automated systems to check the content, or human eyes? They might use some automation as a pre-check, but they’d be fools to rely on it, because if it misses something they’re on the hook. So how many FTEs would you need to hire just to watch the videos uploaded to YouTube? Not even counting breaks, pauses, double-checking, etc., you’re looking at around 30,000 people. Say you pay them $15/hr with no benefits: that’s $10,800,000 per day, close to $4 billion a year, and that’s lowballing it hard, since I’m not counting realistic wages, administrative overhead, benefits, or a realistic work pace. Maybe Alphabet could still afford it; they grossed $60 billion last year, and while they have lots of other expenses, some of that was probably profit. But then I’d ask: could anyone other than Alphabet afford it? Your average PeerTube instance, for instance? The same applies to everyone else.
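That back-of-envelope math can be sketched out directly. This uses only the figures stated in this post (500 hours uploaded per minute, $15/hr, no benefits or overhead); the staffing assumptions are deliberately unrealistic lowballs, as noted above.

```python
# Back-of-envelope cost of real-time human review of YouTube uploads,
# using only the figures from the post (a rough sketch, not real staffing math).
hours_uploaded_per_minute = 500

# 500 hours of video arrive every minute; watching it all in real time
# requires 500 * 60 = 30,000 people viewing continuously.
reviewers_needed = hours_uploaded_per_minute * 60          # 30,000 FTEs

wage_per_hour = 15                                         # $15/hr, no benefits
cost_per_day = reviewers_needed * wage_per_hour * 24       # $10,800,000 per day
cost_per_year = cost_per_day * 365                         # ~$3.94 billion per year
```

The $4 billion figure checks out from these inputs; realistic wages, breaks, double-checking, and management overhead would multiply it several times over.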
But back to my first observation. I don’t see a lot of stuff that’s illegal. I see things that are obnoxious, distracting, etc., but not illegal. It makes me wonder how you conduct yourself as an adult, or what your perspective on lawful speech is, if you find yourself constantly bombarded by material that you believe is or should be illegal.
You see illegal content all the time, even if you won’t acknowledge you do.
They should be compelled to do whatever it takes so that illegal content isn’t available on their platforms. There’s absolutely no reason user posts, and especially advertisements, need to be available instantaneously. The notion that doomscrolling in real time, or being able to broadcast your message to the entire world instantly, is somehow comparable to freedom of speech or speaking in a town square is absurd.
I couldn’t give a single rat’s ass how much it costs. And no…your ballpark figure is absurd. Don’t make some sky-is-falling guesstimate straight out of the playbook of the very people who don’t want their profits infringed on. More money on moderation = less money to buy elections and legislation with. We transitioned away from print and broadcast media that, while respecting free speech, were fairly well regulated. Those “safe” mediums are all but dead because we allowed these social media companies to successfully lobby to avoid all responsibility for illegal content and communication on their platforms.
I’ll give you a practical example of a microcosm: bots in online games. Bots could be functionally eliminated for a fraction of the profits of any MMO. We know they could be, because they were…up until the point the companies realized it’s more profitable to have them. Because they’re not legally compelled to get rid of bots, they’re actually incentivized to allow a certain portion of their population to be bots, because it increases engagement. Why? Because we don’t regulate shit…and the companies are drunk on profits and enshittifying everything, because there’s no competition in any sector any more (antitrust, another adjacent topic: Activision, and eventually Microsoft, should never have been permitted to purchase a profitable Blizzard).
Take just advertising…let’s put everything else aside and agree that social media companies should be responsible for the fucking advertising they post…just like any broadcaster. We can’t even do that. At any given moment, advertisers on every social media platform are bombarding users with anonymous advertisements that break every law in the book…from copyright infringement, to election meddling, to sexually explicit material, to illegal gambling, and all corners beyond. We can’t even regulate that. It’s fuckery that’s rotting the world, in real time…because we care more about Zuckerberg’s bank account than our society.