• prenatal_confusion@feddit.org
    29 days ago

    Isn’t it crazy that 5 years ago we struggled to get software to understand normal sentences? Now this block of text is parsed and the instructions followed. Impressive!

    Not trying to flame; I’m honestly impressed by some aspects of AI. And I know I’m using the term “understood” loosely.

      • jj4211@lemmy.world
        29 days ago

        Yeah, that’s the thing where we get into what I call “superstitious prompting”, like when people say “And make sure you don’t make mistakes” or “Include only factual data without hallucinations” and think it works, until it doesn’t.

        It will at least reply in a way that is narratively consistent with being told to do something or other, and will do things like emit the words “Ok, I understand and will promise to only provide fact based feedback”, but it doesn’t “understand” at all. It works surprisingly well because being narratively consistent with the prompt frequently looks exactly like following instructions.

        People get all the more frustrated when their superstitious prompt fails: they told the LLM to do something (or specifically not to do something), it even promised to do exactly as directed, and then it just proceeds to be a normal LLM anyway.

    • real_squids@sopuli.xyz
      29 days ago

      I kinda miss the gpt-2 days; its output was so interesting/funny compared to what LLMs produce now. Even with image generation, I feel like it’s been downhill for years.

      • JackbyDev@programming.dev
        29 days ago

        🗝️ Keys to success, here’s why

        • Everything looks like this now
        • Not just emojis in headers—it’s em dashes too
        • Delve

        (Honestly it’s mostly the emojis in headers that disgust me.)

    • drath@lemmy.world
      28 days ago

      I’m more disappointed that LLMs have proven that, to pass the Turing test for most people, all you need is essentially a roided-out Markov chain. We thought of ourselves as the most advanced species, with incredibly complex communication, but it turned out to be mostly yapping in the end…
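
      For anyone who hasn’t played with one: the Markov chain being alluded to is just a table of “which word tends to follow which”. A minimal sketch (function names are my own, purely illustrative) of a bigram chain looks like this:

      ```python
      import random

      def train_bigram_chain(text):
          """Build a bigram transition table: word -> list of observed next words."""
          words = text.split()
          chain = {}
          for cur, nxt in zip(words, words[1:]):
              chain.setdefault(cur, []).append(nxt)
          return chain

      def generate(chain, start, length=10, seed=0):
          """Walk the chain from `start`, picking a random observed successor each step."""
          rng = random.Random(seed)
          out = [start]
          for _ in range(length - 1):
              successors = chain.get(out[-1])
              if not successors:
                  break  # dead end: last word never appeared mid-corpus
              out.append(rng.choice(successors))
          return " ".join(out)

      corpus = "the cat sat on the mat and the dog sat on the rug"
      chain = train_bigram_chain(corpus)
      print(generate(chain, "the"))
      ```

      The “roided out” part is the difference in scale: an LLM conditions on thousands of preceding tokens with billions of learned parameters instead of a lookup table of single words, but the core loop (predict the next token from context, append, repeat) is the same shape.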