I’m just a nerd girl.

(Also @umbraroze@slrpnk.net — staying here until that server comes back up; after that, maybe this will be a backup, who knows)

  • 0 Posts
  • 16 Comments
Joined 5 days ago
Cake day: June 6th, 2025



  • Rose@piefed.social to memes@lemmy.world · Malphabet
    English · 7 upvotes · 5 hours ago

    An alphabet starting with M? Hey, prescriptivist dingdongs, you can’t just declare that there’s now apparently an alphabet that starts with M! These things don’t exist in a vacuum. If you make something like this up, you have to go all the way and define the alphabet clearly!

    Rules mean nothing unless they clarify established conventions of language use! Which is why I think the descriptivist approach is better in most cases.


  • So Trump tries to hamper the ICC through its infrastructure. Mess with the email, and everything stops, right? Is that how he believes it works?

    Hey, Trump, you may not believe this, but the ICC has these things called a “legally defined purpose” and “established procedures”. I know you’ve recently proven that as far as the USA is concerned, those are just theoreticals you think you can revoke at a whim, but unfortunately for you, here in the democratic world they still mean something, as far as I know. No email outage will stop this show.







  • I’m not opposed to AI research in general, or to LLMs and whatnot in principle. This stuff has plenty of legitimate use cases.

    My criticism comes in three parts:

    1. Society is not equipped to deal with this stuff. Generative AI was really nice when everyone could immediately tell what was generated and what was not. But when it got better, it turned out people’s critical thinking skills go right out the window. We as a society started using generative AI for utter bullshit. It’s making normal life weirder in ways we could hardly imagine. It would do us all a great deal of good if we took a short break from this and asked what the hell we’re even doing here, and whether some new laws would do any good.

    2. A lot of AI stuff purports to be openly accessible research software released as open source, and the work gets published in scientific journals. But the licenses often have weird restrictions that fly in the face of the open source definition (like how some AI models are “open source” but have a cap on users, which makes them non-open by definition). Most importantly, this research is not easily replicable. It’s done by companies with ridiculous amounts of hardware, shifting petabytes of data they refuse to reveal because it’s a trade secret. If it’s not replicable, its scientific value is more than a little bit in question.

    3. The AI business is rotten to the core. AI businesses like to pretend they’re altruistic innovators taking us to the Future. In reality they’re a bunch of hypemen, slapping barely functioning components together to come up with Solutions to problems that aren’t even problems. Usually to replace human workers, in a way that everyone hates. Nothing may stand in their way - not copyright, not rules of user conduct, not the social or environmental impact they’re creating. If you try to apply even a little bit of reasonable regulation - “hey, maybe you should stop downloading our entire site every 5 minutes, we only update it, like, monthly, and, by the way, we never gave you permission to use this for AI training” - they immediately whinge about how you’re impeding the great march of human progress or some shit.

    And I’m not worried about AI replacing software engineers. That is ultimately an ancient problem: software engineers come up with something that helps them, biz bros say “this is so easy to use that I can just make my programs myself, looks like I don’t need you any more, you’re fired, bye”, and a year later the biz bros come back and say “this software I built is a pile of hellish garbage, please come back and fix it, I’ll pay triple”. This is just Visual Basic for Applications all over again.