• pfm · 6 months ago

    My main concern with people making fun of such cases is that the deficiencies of “AI” become harder to find and detect, even though they’re obviously still present.

    Whenever someone publishes proof of a system’s limitations, the company behind it gets a test case it can use to improve the product. The next time we - the reasonable people arguing that cybernetic hallucinations aren’t AI yet and are dangerous - try to make that point, we’ll just get the reply “oh yeah, but they’ve fixed it”. Even people in IT often don’t understand what they’re dealing with, so non-IT people may have even more difficulty…

    As for me - I just boycott this rubbish. I’ve never tried any LLM and don’t plan to, unless it’s used to work with language, not knowledge.