• tal@lemmy.today · 8 months ago

    I can believe that it won’t happen in 2024.

    I am pretty confident that in the long run, pretty much everyone is gonna wind up there, though. Like, part of the time spent searching is identifying the relevant information on each page and combining it across multiple sources. Having the computer do that is gonna be faster than a human doing it.

    There are gonna be problems, like attributing the original source, poisoning AIs by getting malicious information into their training data, citing the material yourself, and so forth. But I don't think those are gonna be insurmountable.
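    Roughly the kind of thing I have in mind, as a toy sketch (ask_llm() and answer_from_sources() are made-up placeholder names, not any real product's API): fetch a few pages, hand the model numbered excerpts, and tell it to cite which numbers it used, which is also about the crudest possible answer to the attribution problem.

```python
import requests


def fetch_text(url: str) -> str:
    """Grab the raw page body; a real pipeline would strip the HTML down first."""
    return requests.get(url, timeout=10).text


def ask_llm(prompt: str) -> str:
    """Placeholder for whatever LLM API you actually call."""
    raise NotImplementedError


def answer_from_sources(question: str, urls: list[str]) -> str:
    # Number each excerpt so the model can say which source each claim came from.
    excerpts = [f"[{i + 1}] {url}\n{fetch_text(url)[:2000]}"
                for i, url in enumerate(urls)]
    prompt = ("Answer the question using only the numbered sources below, "
              "and cite the numbers of the sources you relied on.\n\n"
              + "\n\n".join(excerpts)
              + f"\n\nQuestion: {question}")
    return ask_llm(prompt)
```

    The hard part isn't the plumbing, it's getting the model to actually stay inside those sources and cite them honestly.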

    It’s actually kind of interesting how much using something like an LLM looks like the Librarian in the cyberpunk novel Snow Crash, the daemon Hiro uses for his Babel/Infocalypse research. The AI there was very explicit that it didn’t have reasoning capability: it could just take natural-language queries, find information, combine it, and produce a human-readable answer, but it couldn’t make judgement calls or do independent reasoning, and it regularly rejected queries that required that.

    Though that was intended as an academic tool, not something for the masses, and it was excellent at citing its sources, which existing LLM-based systems are awful at.