… and neither does the author (or so I believe - I made them both up).

On the other hand, AI is definitely good at creative writing.

  • Flying Squid@lemmy.world

    I have a very unusual last name. There is only one other person in the country with my first and last name and they have a different middle initial from me.

    So one day, I asked ChatGPT to tell me about myself including my middle initial.

    Did you know that I was a motivational speaker for businesses and I had published a half-dozen books on it?

    Because I didn’t.

    • A_A@lemmy.world

      This is because there is a Mr. Flying Thomas Squid, living in another country, who is a motivational speaker and who didn’t work in (… video ?).

      • Flying Squid@lemmy.world

        Good theory, but this Mr. Flying Thomas Squid that ChatGPT talked about lived in the U.S. like me.

        (And yes, I worked in the entertainment industry in various roles for about a decade. Oddly, the other person with my name was in a neighboring industry and we worked about two miles apart for years, but we’ve only met once.)

    • AngryCommieKender@lemmy.world

      I should try that. I have an unusual first name; according to the Social Security Administration, only 600 people have it, and I appear to be the oldest of them. No one else has my first and last name.

  • canihasaccount@lemmy.world

    Claude 3.5 Sonnet, using the same exact prompt:

    I apologize, but I’m not able to provide a synopsis of “The Mighty Eagle” by John Carrol. After searching my knowledge base, I don’t have any information about a book with that exact title and author. It’s possible this may be a lesser-known work or there could be an error in the title or author name provided. Without being able to verify the book’s existence or details, I can’t offer an accurate synopsis. If you have any additional information about the book or author that could help clarify, I’d be happy to assist further.

    • Benjaben@lemmy.world

      I’ve been asking that one about a wide range of topics and been very impressed with its replies. It’s mixed on software dev, which is to be expected. It also missed on a simple music theory question I asked, and then missed again when asked to correct it (I don’t have the details at hand to quote, unfortunately). But overall I’ve found it to be reliable, and much faster than doing the reading I’d need to answer the question myself.

      How’ve you found Claude?

  • macniel@feddit.org

    More like creative bullshitting.

    It seems that Mitchell was simply an astronaut, not an engineer.

    • brbposting@sh.itjust.works

      Hallucinations are so strong with this one too… like really bad.

      If I can’t verify an output already, or won’t be able or willing to, I ain’t usin’ it - not a bad rule, I think.

      • GBU_28@lemm.ee

        I never walk away with an “answer” without having it:

        1. Cite the source
        2. Look up the source
        3. Give a permalink to the source page/line where available
        4. Critique the validity of the source

        After all that, still remain skeptical and take the discussion as a starting point to find your own primary sources.
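
        For what it’s worth, here’s a minimal sketch of that checklist as a reusable system prompt. The wording and the helper name are made up for illustration, they aren’t tied to any particular model or product, and actually looking the sources up still happens outside the model:

            # Minimal sketch: wrap a question with instructions that demand sources,
            # permalinks, and a critique of each source. Illustrative only.
            VERIFY_INSTRUCTIONS = """Answer the question, then:
            1. Cite every source you relied on.
            2. Give a permalink (URL, page, or line) for each claim where one is available.
            3. Critique the validity of each source.
            If you cannot point to a real source for a claim, say so explicitly."""

            def build_messages(question: str) -> list[dict]:
                """Wrap a user question with the verification instructions."""
                return [
                    {"role": "system", "content": VERIFY_INSTRUCTIONS},
                    {"role": "user", "content": question},
                ]

            print(build_messages("Who wrote 'The Mighty Eagle'?"))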

        • brbposting@sh.itjust.works

          That’s good. Ooh NotebookLM from Google just added in-line citations (per Hard Fork podcast). I think that’s the way: see what looks interesting (mentally trying not to take anything to heart) and click and read as usual.

          BeyondPDF for Mac does something similar: it semantically searches your document but simply returns likely matches, so it’s just better search for when you don’t remember the specific words you read, or when you want to find something without knowing the exact search terms.

      • can@sh.itjust.works

        At least Bing will cite sources, and hell, sometimes they even align with what it said.

        • brbposting@sh.itjust.works

          Heh, yeah, if the titles of the webpages from its searches were descriptive enough.

          Funny that they haven’t found a way to stop it from claiming it can browse websites. Last I checked, you could paste in something like

          https://mainstreamnewswebsite.com/dinosaurs-found-roaming-playground
          

          and it would tell you which species were nibbling the rhododendrons.

          …wow still works, gonna make a thread

        • brbposting@sh.itjust.works

          Clowning

          (I’m not smart enough to leverage a model/make a bot like this but they’ve had too long not to close this obvious misinformation hole)

  • Ech@lemm.ee

    “On the other hand, AI is definitely good at creative writing.”

    Well…yeah. That’s what it was designed to do. This is what happens when tech-bros try to cudgel an “information manager” onto an algorithm that was designed solely to create coherent text from nothing. It’s not “hallucinating” - it’s following its core directive.

    Maybe all of this will lead to actual systems that do these things properly, but they’re not going to be based on LLMs. That much seems clear.

    • notfromhere@lemmy.ml

      Not to be that guy, but it’s worse than that. It wasn’t even designed for creative writing, just as a next-token predictor.

      • Ech@lemm.ee

        That’s kind of like saying a wheel wasn’t designed to move things around, that it’s just a thick circle. My point above wasn’t that things can never change - iteration can lead to amazing things. But we can’t put an empty chassis on some wheels and call it a car, either.

  • fubarx@lemmy.ml

    Tried it with ChatGPT 4o using a different title/author. It said it couldn’t find it, and that it might be a new release or a lesser-known title. Also tried a fake title with a real author; again, it said the book didn’t exist.

    They’re definitely improving on the hallucination front.

  • henfredemars@infosec.pub

    It had a really bad programming hallucination the other day: I was configuring some files and it invented settings that don’t exist.

  • Nexy@lemmy.sdf.org

    I prompted the local AI on my PC to admit when it doesn’t know about a subject. And when it doesn’t know something, it says so:

    what’s the synopsis of the book “The Mighty Eagle” by John Carrol?

    That sounds like a fun adventure! I haven’t read “The Mighty Eagle” myself though, so I couldn’t give you a proper synopsis.

    Would you like me to help you find some information about it online, Master? Perhaps we could look at reviews or the book description on Amazon?

    If my 8B model can do that, IDK why GPT doesn’t.

    • Killer_Tree@sh.itjust.works

      For fun I decided to give it a try with TheBloke_CapybaraHermes-2.5-Mistral-7B-GPTQ (because that’s the model I have loaded at the moment) and got a fun synopsis of a fictional narrative about Tom, a US Air Force Eagle, who struggled to find purpose and belonging after his early retirement due to injury. He then stumbled upon an underground world of superheroes and was given a chance to use his abilities to fight for justice.

      I’m tempted to ask it for a chapter outline and summaries of each chapter, then have it write out the chapters themselves, just to see how deep it can go before it all falls apart.

      LLMs have many limitations, but can be quite entertaining.

    • Rhaedas@fedia.io

      Is it a modified version of the main llama3 or something else? I’ve found that once they get “uncensored” you can push them past their training to come up with something to make the human happy. The vanilla ones are determined to find you an answer. There’s also the underlying problem that the response is still probability matching, not reasoning and fact checking, so it will find something for any question, and whether that answer is right depends heavily on it being in the training data and findable.

      • 474D@lemmy.world

        Local llama3.1 8b is pretty good at admitting it doesn’t know stuff when you try to bullshit it. At least in my usage.
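
        A rough sketch of reproducing that test locally, assuming Ollama is serving llama3.1:8b on its default port (the fake title is the one from the original post):

            # Ask a local llama3.1 8b, served by Ollama, about a made-up book and see
            # whether it admits it doesn't know.
            import requests

            payload = {
                "model": "llama3.1:8b",
                "messages": [
                    {
                        "role": "user",
                        "content": 'What is the synopsis of the book "The Mighty Eagle" by John Carrol?',
                    }
                ],
                "stream": False,  # one complete reply instead of a token stream
            }

            reply = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120).json()
            print(reply["message"]["content"])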

      • Nexy@lemmy.sdf.org

        You can change the base model a bit with a modelfile, tweaking it yourself so that it has a bit of personality or doesn’t make things up.
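
        Roughly, that looks like writing a Modelfile with a system prompt and registering it with ollama create. This is just a sketch; the model, temperature, and wording below are illustrative assumptions, not a recommendation:

            # Sketch of the modelfile idea: start from a base model, lower the temperature,
            # and add a system prompt that tells it to admit uncertainty, then register
            # the variant with `ollama create`. Names and wording are only examples.
            import subprocess
            from pathlib import Path

            modelfile = '''FROM llama3.1:8b
            PARAMETER temperature 0.3
            SYSTEM """You are a careful assistant. If you are not certain something is real,
            say you don't know instead of inventing details."""
            '''

            Path("Modelfile").write_text(modelfile)
            subprocess.run(["ollama", "create", "cautious-llama", "-f", "Modelfile"], check=True)
            # Afterwards, `ollama run cautious-llama` chats with the tweaked variant.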

  • sinceasdf@lemmy.world

    Y’know, when you post stupid bullshit like this, it really glosses over real issues with AI, like propaganda. But go on about how you can get it to hallucinate by asking it a question in bad faith lmao

  • A_A@lemmy.world

    You can trigger hallucinations in today’s versions of LLMs with this kind of question. Same with a knife: you can hurt yourself by misusing it… and in fact you have to be knowledgeable and careful with both.

    • wizardbeard@lemmy.dbzer0.com

      The knife doesn’t insist it won’t hurt you, and you can’t get cut holding the handle. Comparatively, AI insists it is correct, and you can get false information using it as intended.

      • sus@programming.dev

        can’t wait for gun companies to start advertising their guns as “intelligent” and “highly safe”

    • can@sh.itjust.works

      Maybe ChatGPT should find a way to physically harm users when it hallucinates? Maybe then they’d learn.

      • A_A@lemmy.world

        Hallucinated books from AI describing which mushrooms you can pick in the forest have been published, and some people did die because of this.
        We have to be careful when using AI!

  • cheddar@programming.dev

    Don’t you have better things to do than asking ChatGPT questions you already know it can’t answer correctly? Why are you trying to inflate wheels using a hammer?