• 0 Posts
  • 161 Comments
Joined 11 months ago
Cake day: June 15th, 2024



  • There’s someone close to me whose near-entire existence is basically pain. They still draw.

    They hate the idea that their works got sucked up by billionaires into giant plagiarism machines that are enriching them further. Pro-AI people and tech bros think they should just suck it up and start using fucking AI Horde or something, despite the fact that this trend makes them sick and the proposed solutions don’t tackle the real issues so much as spread or ignore them.

    One of my main gripes with GenAI is the tech industry’s usual disregard for consent. GenAI users saying we should get rid of it altogether doesn’t endear their ideal future to me. Saying the same thing as Sam Altman, but totally in a leftist way, just grosses me out.


  • I’ve heard that many men do this because they’ve realized, in some capacity, that outright admitting they’re right-wing limits their opportunities. In my circles, I’ve noticed this “I’m actually a centrist/apolitical” trend is also found among popular developers and tech influencers.

    Saying you’re anti-woke gets you shunned and surrounded by horrible people, but saying you’re just apolitical gets you the blessing and protection of self-proclaimed centrists. When you, for example, marginalize LGBT folks and get called out, countless defenders will gather to complain about people “dragging politics into tech.” Bryan Lunduke will come out of his cave and write a piece about how the trans fetish is trying to kill open source.


  • Thank you for acknowledging that. And you may be right about blocking.

    I just think it’s difficult for folks, me included, to merely hide what they consider to be an issue. The situations aren’t comparable, but if I saw a self-proclaimed leftist community sharing anti-union propaganda, I’d rather discuss it than hide it. I’m not claiming that’s the healthiest mindset or the correct one, but I don’t think it’s entirely without reason.

    These situations, wherein a group broadcasts an idea to everybody, then silences dissent because it’s “their turf, their rules,” never seemed fair. Shields like “they’re trolling”, “neoliberals”, “bots” and “brigading” intensify the issue—some mod comments read like a mirror of r/conservative. Why does the blame lie solely with one side, when the subject is controversial and sharing it with everyone was also a deliberate choice?

    There was talk of an option for communities to self-exclude from the “all” feed. I wonder if features like that could be a better solution here. I’ll refrain from talking about AI and ethics in db0 in the future, but I feel like they should do better themselves.













  • mke@programming.dev to Microblog Memes@lemmy.world · Notepad · 27 days ago

    Right. In this instance, with hindsight (I noticed it’s a meme community), I wouldn’t say anything. I’ve seen similar cases where the intent was to put someone down, though. I wasn’t sure, so I erred on the side of caution.

    I didn’t mean to come across as uptight or to attack the commenter (I tried to keep a mild tone). My bad.



  • You’re right about it not being inherent to the tech, and I sincerely apologize if I insist too much despite that. This will be my last reply to you. I hope I gave you something constructive to think about rather than just noise.

    The issue, and my point, is that you’re defending a technicality that doesn’t matter in real-world usage. Almost no one uses non-corporate, ethical AI. Most organizations working with AI aren’t starting from scratch, because doing so is disadvantageous or outright unfeasible resource-wise. Instead, they use pre-existing corporate models.

    Edd may not be technically right, but he is practically right. The people he’s referring to are extremely unlikely to be using or creating completely ethical datasets/AI.


    The vast majority of people don’t think in legal terms, and it’s always possible for something to be both legal and immoral. See: slavery, the actions of the Third Reich, killing or bankrupting people by denying them health insurance… and so on.

    There are teenagers, even children, whose posted works have been absorbed into AI training without their awareness or consent. Are literal children, who just wanted to share and participate in a community, to blame for not understanding laws that companies would later abuse?

    And AI companies aren’t merely using licensed material, they’re using everything they can get their hands on. If they’re already pirating, you bet your ass they’ll use your nudes if they find them, public domain or not. Revenge porn posted by an ex? Straight into the training data.

    So your argument is:

    • It’s legal

    But:

    • What’s legal isn’t necessarily right
    • You’re blaming children before companies
    • AI makers actually use illegal methods, too

    It’s closer to victim blaming than you think.

    The law isn’t a reliable compass for what is or isn’t right. When the law is wrong, it should be changed. IP law is infamously broken in how it advantages companies and gets (ab)used by them. For a few popular examples: see how YouTube mediates disputes between companies and creators, Nintendo suing everyone they can (which costs the victims more than it costs Nintendo), and everything Disney did to IP legislation.