• MyMindIsLikeAnOcean@piefed.world
    4 days ago

    What you’re saying is incoherent.

On one hand you want government-mandated tools…on the other you want unlimited freedom.

    I just want the content that’s posted on social media platforms to be legal. I don’t know why you’re babbling about restriction of freedom. You want illegal content to be…legal?

    • deathbird@mander.xyz
      2 days ago

      I’m not sure what data-speech you personally think should or shouldn’t be legal, but I know what kinds a lot of people argue should be illegal: things ranging all the way from videographic records of child abuse (CSAM) to unauthorized copyrighted material to libel to hate speech to blasphemy and plenty else not mentioned. I think some of it is deservedly illegal (e.g. CSAM) and some of it shouldn’t be (e.g. blasphemy).

      My position is that in a pluralistic society there will be a variety of speech that people won’t want to see for various reasons, and they have a right not to see it. They have a right to have tools that allow them to not see things they don’t want to see. And government censorship of speech should be limited to the absolute bare minimum of speech that causes material harm, and legal responsibility for those rare instances of illegal speech should fall upon the speaker and not the platform or carrier.

      • MyMindIsLikeAnOcean@piefed.world
        2 days ago

        The only thing I’m talking about is social media companies moderating their platforms so there’s zero tolerance for illegal communication, meaning content that violates the currently legislated laws of a region.

        Currently, in North America, social media companies moderate themselves…typically with user reporting and automation. There’s an hours-long gap between infractions and action.

        This could be eliminated with proper moderation. I believe this is the bare minimum. The current status quo is the Wild West…children and adults alike are bombarded with illegal content each time they use social media, or the internet at large.

        • deathbird@mander.xyz
          3 hours ago

          While I’m bombarded by obnoxious content on social media, I very very rarely see content that is illegal in my area. Let’s stick a pin in that.

          According to a few sources I’ve seen, 500 hours of video are uploaded to YouTube every minute. Suppose Alphabet were to be held liable for any of that content being illegal. Like strict liability. Would they trust automated systems to check the content, or human eyes? They might use some automation as a pre-check, but they’d be fools to rely on it, because if it misses something they’re on the hook.

          So how many people would you need to hire just to watch the videos uploaded to YouTube? 500 hours per minute is 30,000 hours of video arriving every hour, so you’d need around 30,000 reviewers watching around the clock, not even counting breaks, pauses, double-checking, etc. Pay them $15/hr with no benefits and that’s $10,800,000 per day, close to $4 billion a year, and that’s super low-balling it because I’m not counting realistic wages, administrative overhead, benefits, or a realistic work pace.

          Maybe Alphabet could still afford it. They grossed $60 billion last year, and while they have lots of other expenses, some of that was probably profit. But then I’d ask: could anyone other than Alphabet afford it? Your average PeerTube instance, for instance? The same applies to all the rest.
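          The back-of-envelope figures above can be checked with a quick sketch. The 500-hours-per-minute upload rate, 1x-speed review, and $15/hr wage are the comment's assumptions, not official numbers:

```python
# Back-of-envelope cost of all-human review at YouTube scale.
# Assumptions taken from the comment above, not official figures:
#   - 500 hours of video uploaded per minute
#   - reviewers watch at 1x speed, staffed 24/7
#   - $15/hr, no benefits, no overhead, no breaks

UPLOAD_HOURS_PER_MINUTE = 500
WAGE_PER_HOUR = 15  # dollars

# Hours of video arriving every hour
hours_per_hour = UPLOAD_HOURS_PER_MINUTE * 60  # 30,000

# One reviewer watching continuously covers 1 hour of video per hour,
# so you need that many seats staffed around the clock.
seats = hours_per_hour

cost_per_day = seats * 24 * WAGE_PER_HOUR  # dollars per day
cost_per_year = cost_per_day * 365         # dollars per year

print(f"seats staffed 24/7: {seats:,}")
print(f"cost per day:  ${cost_per_day:,}")
print(f"cost per year: ${cost_per_year / 1e9:.2f} billion")
```

          Note the 30,000 figure counts continuously staffed seats; with ordinary 8-hour shifts the actual headcount would be roughly three times higher, which is part of why this is a low-ball estimate.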

          But back to my first observation. I don’t see a lot of stuff that’s illegal. I see things that are obnoxious, distracting, etc., but not illegal. It makes me wonder how you conduct yourself as an adult, or what your perspective on lawful speech is, if you find yourself constantly bombarded by material that you believe is or should be illegal.