• meyotch@slrpnk.net · 6 hours ago

    I suspect it may be due to a similar habit I have when chatting with a corporate AI. I intentionally salt my inputs with random profanity or non-sequitur info, partly for lulz, but also to poison those pieces of shit's training data.

    • catloaf@lemm.ee · 6 hours ago

      I don’t think they add user input to their training data like that.

      • kitnaht@lemmy.world · 6 hours ago

        They don’t. The models are trained on sanitized data and don’t permanently “learn” from chats. They have a large context window to pull from (reaching 200k ‘tokens’ in some instances), but whatever you type only lives in that window for the session; the weights themselves never change. Lots of people misunderstand how this stuff works on a fundamental level.
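
        To make the context-window point concrete, here's a minimal sketch of the idea, not any vendor's actual code: the chat history gets replayed every turn and trimmed to whatever still fits the window, while the model's weights never change. The `build_prompt` helper and the whitespace token count are made up for illustration; real services use proper BPE tokenizers.

        ```python
        # Sketch: a chat's "memory" is just the message history replayed each turn,
        # trimmed to a fixed context window. Nothing is written back into the weights.

        CONTEXT_WINDOW = 200_000  # tokens; roughly the large windows mentioned above

        def count_tokens(text: str) -> int:
            # Crude whitespace approximation of a tokenizer, purely for illustration.
            return len(text.split())

        def build_prompt(history: list[dict], window: int = CONTEXT_WINDOW) -> list[dict]:
            """Keep only the most recent messages that still fit inside the window."""
            kept, used = [], 0
            for msg in reversed(history):              # walk from newest to oldest
                cost = count_tokens(msg["content"])
                if used + cost > window:
                    break                              # older messages simply fall out of "memory"
                kept.append(msg)
                used += cost
            return list(reversed(kept))                # back to chronological order

        # Every turn the trimmed history is sent again; the model itself is stateless.
        history = [
            {"role": "user", "content": "salted input with random profanity"},
            {"role": "assistant", "content": "noted"},
            {"role": "user", "content": "what did I say earlier?"},
        ]
        prompt = build_prompt(history)
        print(len(prompt), "messages,", sum(count_tokens(m["content"]) for m in prompt), "tokens this turn")
        ```

        So "poisoning" a chat only pollutes your own session's context; once the conversation ends (or scrolls out of the window), it's gone unless the provider separately chooses to collect and curate it for future training.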