• Ilovethebomb@sh.itjust.works
    1 day ago

    I mean, that’s part of the issue. We trained a machine on the entire Internet, didn’t vet what we fed in, and let children play with it.

    • shalafi@lemmy.world
      20 hours ago

      Can’t see how they would get the monstrous dataset(s) required without indiscriminate vacuuming. If we wanted to be more discriminate about ingestion parameters, the man-hours involved would be mind-boggling.

    • fullsquare@awful.systems
      1 day ago

      well nobody guarantees that internet is safe, so it’s more on chatbot providers pretending otherwise. along with all the other lies about machine god that they’re building that will save all the worthy* in the incoming rapture of the nerds, and even if it destroys everything we know, it’s important to get there before the chinese.

      i sense a bit of “think of the children” in your response and i don’t like it. llms shouldn’t be used by anyone. there was recently a case of a guy with dementia who died after a fb chatbot told him to go to nyc

      * mostly techfash oligarchs and weirdo cultists