• jsomae@lemmy.ml · 7 hours ago

    It’s not useful to talk about the content LLMs create in terms of whether they “understand” it or not. How can you verify whether an LLM understands what it’s producing? Do you think it’s possible that some future technology might have this understanding? Do humans understand everything they produce? (A lot of people get pretty far by bullshitting.)

    Shouldn’t your argument equally apply to aimbots? After all, does an aimbot really understand the strategy, the game, the je-ne-sais-quoi of high-level play?

      • jsomae@lemmy.ml · edited · 6 hours ago

        Totally agreed. What most people don’t realize is that bullshit is far more powerful than we could ever have imagined. (I also suspect that humans, including me, bullshit our way through daily life more than we realize.)

        So-called AI “reasoning” essentially works by having the AI produce bullshit, then read that bullshit back and check how reasonable it sounds (of course, “checking how reasonable it sounds” is also bullshit, but it’s at least systematic bullshit). This can produce genuinely useful results. Obviously, you need to know when it’s a good time to use AI and when it isn’t, which most people still don’t have a good feel for.
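
        For concreteness, here’s a minimal sketch of that generate-then-self-check loop in Python. The `llm()` function is a hypothetical stand-in for any text-generation API (no particular provider or library is assumed), and the prompts are purely illustrative:

        ```python
        # Rough sketch of "produce bullshit, then read it over and check how
        # reasonable it sounds." `llm()` is a hypothetical stand-in for any
        # text-generation API; no particular provider is assumed.

        def llm(prompt: str) -> str:
            """Hypothetical call to a language model; returns generated text."""
            raise NotImplementedError  # plug a real API client in here

        def reasoned_answer(question: str, max_rounds: int = 3) -> str:
            # First pass: generate a draft answer (the initial "bullshit").
            draft = llm(f"Answer step by step:\n{question}")
            for _ in range(max_rounds):
                # Self-check: ask the model to critique its own draft. This is
                # itself just more generation, but it's systematic generation.
                critique = llm(
                    f"Question:\n{question}\n\nDraft answer:\n{draft}\n\n"
                    "List any errors or unsupported claims. Reply OK if there are none."
                )
                if critique.strip().upper().startswith("OK"):
                    break
                # Revise the draft using the critique, then check again.
                draft = llm(
                    f"Question:\n{question}\n\nDraft answer:\n{draft}\n\n"
                    f"Critique:\n{critique}\n\nWrite an improved answer."
                )
            return draft
        ```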

        If this sounds absurd, consider how people can do very well on exams about subjects they know nothing about by bullshitting. An LLM can do that too, and on top of that it has been trained on far more material than any human. So it’s more capable of bullshitting than any human ever could be.

        But still, people think it’s useless because “it doesn’t understand.”