• fullsquare@awful.systems · 9 points · 2 days ago

    it’s trained on the entire internet, of course everything is there. tho taking bomb-building advice from an idiot box that can’t count the letters in a word has gotta be an entirely new type of darwin award

    • Ilovethebomb@sh.itjust.works · 5 points · 2 days ago

      I mean, that’s part of the issue. We trained a machine on the entire Internet, didn’t vet what we fed it, and let children play with it.

      • shalafi@lemmy.world · 3 points · 1 day ago

        Can’t see how they would get the monstrous dataset(s) required without indiscriminate vacuuming. If we wanted to be more discriminate about ingestion parameters, the man-hours involved would be mind-boggling.

      • fullsquare@awful.systems · 6 points · 2 days ago

        well, nobody guarantees that the internet is safe, so it’s more on the chatbot providers for pretending otherwise. along with all the other lies about the machine god they’re building that will save all the worthy* in the incoming rapture of the nerds, and that even if it destroys everything we know, it’s important to get there before the chinese.

        i sense a bit of “think of the children” in your response and i don’t like it. llms shouldn’t be used by anyone. there was recently a case of a dude with dementia who died after a fb chatbot told him to go to nyc

        * mostly techfash oligarchs and weirdo cultists