• jsomae@lemmy.ml · 1 day ago

    I don’t think we disagree that much.

    So read and learn.

    Okay, I agree that AI can have an environmental impact through power usage and water consumption. But this isn't a fundamental problem: we can use green power (I've heard there are plans to build nuclear plants in California for this reason) and build data centers somewhere without water shortages (i.e., somewhere other than California). In this regard AI differs from fossil fuels, which are fundamentally environmentally damaging.

    But still, I cringe when someone implies that open-model, locally hosted AIs are environmentally problematic. Anyone who says that has no sense of scale whatsoever.

    But it still has to be revised, refined, and inevitably fixed when it hallucinates precedent citations and just about anything else.

    Well yeah, it's slop, as I said. These methods are only suitable in cases where complete reliability is not required. But there's no reason to believe that hallucinations won't decrease in frequency over time (as they already have), or that the domains in which hallucinations are common won't shrink over time. I'm not claiming these methods will ever reach 100% reliability, but humans (the thing they are meant to replace) aren't 100% reliable either. So how many years until the reliability of an LLM exceeds that of a human? Yes, I know I'm making humans sound fungible, but to our corporate overlords we mostly are.

    if you haven’t noticed what AI has done to the HR industry, let me summarize it thusly: it has destroyed it.

    Good, so we agree that there is the potential for long-term damage. In other words, AIs are a long-term threat, not just a short-term one. Maybe the bubble will pop, but so did the dotcom bubble, and we still have the internet.

    enshittification

    No, I think enshittification started well before 2022 (when ChatGPT launched). Sure, even before that, LLMs were churning out SEO garbage webpages that Google was returning in search results, so you can blame AI in that regard; but I don't believe for a second that Google couldn't have found a way to filter those kinds of results out. The user-negative feature was profitable for them, so they didn't fix it. If LLMs hadn't been around, they would have found other ways to make search more user-negative (and they probably did indeed employ such techniques).