• melfie@lemmy.zip · 22 points · 3 days ago

    How is an AI agent any different from any other software just because it does inference with an LLM? If I order something from their website and get overcharged due to a bug, are they not responsible for that either? It’s not like agents can’t be tested, or like guardrails can’t be put in place.

    I know that, as a software engineer, I’m responsible for the code in any PR that has my name on it, regardless of what tools I used to generate it, AI included. Are their dev teams not responsible for making sure their shit works?

    • Bluescluestoothpaste@sh.itjust.works · 3 points · 3 days ago

      Because with most other software, the dev understands what they built, or can debug it when something is off. LLMs are black boxes; the devs have no clue why the answers are sometimes wildly incorrect.

      • melfie@lemmy.zip · 3 points · 3 days ago

        Sure, but AI engineers are (or should be) well aware of that, and there are ways to limit the potential damage, like keeping a human in the loop, especially for purchases over a certain threshold. Overall, a system like this should never be trusted to make purchases without the customer approving each one.
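        To be concrete, that kind of guardrail is a few lines of code. Here’s a minimal sketch; the PurchaseRequest type and the approval flow are made up for illustration, not any real agent API:

        ```python
        from dataclasses import dataclass

        APPROVAL_THRESHOLD = 50.00  # dollars; above this, a human must sign off

        @dataclass
        class PurchaseRequest:
            item: str
            price: float

        def request_customer_approval(req: PurchaseRequest) -> bool:
            # A real system would notify the customer (push, email, etc.);
            # a console prompt stands in for that here.
            answer = input(f"Approve {req.item} for ${req.price:.2f}? [y/N] ")
            return answer.strip().lower() == "y"

        def execute_purchase(req: PurchaseRequest) -> None:
            # The agent proposes the purchase; anything over the threshold
            # only goes through with explicit customer approval.
            if req.price > APPROVAL_THRESHOLD and not request_customer_approval(req):
                print(f"Blocked: {req.item} (${req.price:.2f}) was not approved.")
                return
            print(f"Purchasing {req.item} for ${req.price:.2f}...")
        ```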

        Then again, if you’re going to approve every purchase, I’m not sure how much time it really saves. And if it’s purchasing without approval, the first time it buys something you didn’t want, the fight with Target to get it refunded will wipe out any time savings. This largely seems like AI for the sake of AI.

        • Archer@lemmy.world · 3 points · 2 days ago

          Not if they make all their customer support AI as well and make it impossible to talk to a human!