• SuperZutsuki [they/them]@hexbear.net
    2 months ago

It’s very telling that they just implemented the AI without giving its answers any sanity checks from the beginning. They could have caught it on day one, but no: it’s magic, and checking would be a waste of time. Brainworms.

• I mean, it could’ve worked well at the beginning, then gone off the rails for one reason or another.

  That’s the dumb and scary thing about AI stuff: it might work today, it might work for years (if you’re lucky), but every time you execute a prompt, you’re rolling the dice on whether the mystery box will decide to just make up some shit from here on out. If you need a person to check the AI’s output to make sure it’s not hallucinating, you might as well cut the AI out of the loop altogether and use the checker’s output from the get-go.