Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hope that all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

  • calcopiritus@lemmy.world · 17 days ago

    Energy consumption limit. Every AI product has a consumption limit of X GJ. After that, the server just shuts off.

    The limit should be high enough to not discourage research that would make generative AI more energy efficient, but it should be low enough that commercial users would be paying a heavy price for their waste of energy usage.

    Additionally, data usage consent for generative AI should be opt-in. Not opt-out.
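A cap like this would have to be enforced at the serving layer. The sketch below is purely illustrative (all names and the budget figure are invented; real metering would come from datacenter power monitoring, not the application):

```python
# Hypothetical sketch of a hard per-product energy budget: once the
# cumulative joules consumed cross the limit, the server stops serving.

class EnergyBudget:
    def __init__(self, limit_joules: float):
        self.limit_joules = limit_joules
        self.used_joules = 0.0

    def record(self, joules: float) -> None:
        self.used_joules += joules

    @property
    def exhausted(self) -> bool:
        return self.used_joules >= self.limit_joules

def serve_request(budget: EnergyBudget, request_cost_joules: float) -> str:
    # Refuse work once the budget is spent; otherwise meter and serve.
    if budget.exhausted:
        return "503: energy budget exhausted, server shut off"
    budget.record(request_cost_joules)
    return "200: OK"

budget = EnergyBudget(limit_joules=1e9)  # 1 GJ, an invented figure
print(serve_request(budget, 5e5))  # → 200: OK
```

The open policy question in the thread (per-product vs. per-company, and how to stop entities from splitting into many "products") is exactly where a sketch like this stops being useful.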

    • CanadaPlus@lemmy.sdf.org · 17 days ago

      Out of curiosity, how would you define a product for that purpose? It’s pretty easy to tweak a few weights slightly.

      • calcopiritus@lemmy.world · 16 days ago

        You can make the limit per-company instead, with big fines if you create thousands of companies to get around the law.

        • CanadaPlus@lemmy.sdf.org · 16 days ago

          Ah, so we’re just brainstorming.

          It’s hard to nail down “no working around it” in a court of law. I’d recommend carbon taxes if you want to incentivise saving energy with policy. Cap and trade is also seen as a gold standard option.

          • calcopiritus@lemmy.world · 16 days ago

            Carbon taxes still allow you to waste as much energy as you want; they just make it more expensive. The objective is to put a limit on how much they are allowed to waste.

            I’m not a lawyer. I don’t know how to write a law without possible exploits, but I don’t think it would be hard for an actual lawyer to draft a law in this spirit that is not easily avoided.

            • CanadaPlus@lemmy.sdf.org · 13 days ago

              It really is hard; I can even think of laws passed this century that turned out to have loopholes. (And FWIW policy writing is a separate discipline)

              Even the most basic laws can have surprising nuances needed to make them specific enough to enforce. I recall a case of a person who tried shoplifting a coat that was chained to the mannequin, and got caught when the chain went taut. They got off because, while they had left the store without paying, the coat being permanently chained to something meant they weren’t technically in possession of it.

              Carbon taxes still allow you to waste as much energy as you want; they just make it more expensive. The objective is to put a limit on how much they are allowed to waste.

              So per person carbon rationing, maybe? During WWII they did something similar with food; you had to pay both cash and ration tokens to buy groceries or visit a restaurant.

              Rationing is fairly out of style because it’s inflexible, though. There’s going to be certain people that have a very legitimate reason to pollute more, and a soft incentive in the form of price allows them to do that if absolutely necessary.

  • jjjalljs@ttrpg.network · 17 days ago

    Other people have some really good responses in here.

    I’m going to echo that AI is highlighting the problems of capitalism. The ownership class wants to fire a bunch of people and replace them with AI, and keep all that profit for themselves. Not good.

    • Dr. Moose@lemmy.world · 17 days ago

      Nobody talks about how it highlights the success of capitalism, either.

      I live in SEA, and AI is incredibly powerful here, giving anyone the opportunity to learn. The net positive of this is incredible even if you think that copyright is good and intellectual property needs government protection. It’s just that lopsided of an argument.

      I think western social media is spoiled and angry at the wrong thing, but fighting these people is entirely pointless because you can’t reason someone out of a position they didn’t reason themselves into. Big tech == bad, blah blah blah.

      • jjjalljs@ttrpg.network · 17 days ago

        You don’t need AI for people to learn. I’m not sure what’s left of your point without that assertion.

        • Dr. Moose@lemmy.world · 17 days ago

          You’re showing your ignorance if you think the whole world has access to a fit education. And I say “fit” because there’s a huge difference between learning from books made for Americans and AI experiences tailored just for you. The difference is insane, and anyone who doesn’t understand that should really get out more. I’ll leave it at that.

          Just the amount of friction that AI removes makes learning so much more accessible for a huge percentage of the population. I’m not even kidding: as an educator, the LLM is the best invention since the internet, and this will be very apparent in 10 years. You can quote me on this.

          • jjjalljs@ttrpg.network · 17 days ago

            You shouldn’t trust anything the LLM tells you, though, because it’s a guessing machine. It is not credible. Maybe if you’re just using it for translation into your native language? I’m not sure if it’s good at that.

            If you have access to the internet, there are many resources available that are more credible. Many of them free.

            • untakenusername@sh.itjust.works · 17 days ago

              You shouldn’t trust anything the LLM tells you though, because it’s a guessing machine

              You trust tons of other uncertain, probability-based systems, though. Like the weather forecast: we all trust that, even though it “guesses” the future weather with some other math.

              • jjjalljs@ttrpg.network · 17 days ago

                That’s really not the same thing at all.

                For one, no one knows what the weather will be like tomorrow. We have sophisticated models that do their best. We know the capital of New Jersey. We don’t need a guessing machine to tell us that.

                • untakenusername@sh.itjust.works · 17 days ago

                  For things that require a definite, correct answer, an LLM just isn’t the best tool. However, if the task is something with many correct answers, or no correct answer, like writing computer code (if it’s rigorously checked, it’s actually not that bad) or analyzing vast amounts of text quickly, then you could make the argument that it’s the right tool for the job.
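That “rigorously checked” caveat is the crux: LLM-written code is only usable once it’s verified against something trusted. A minimal sketch of that workflow, with an invented stand-in where the LLM’s output would go:

```python
import random

# Stand-in for a function an LLM wrote; in practice you'd paste the
# generated code here and fuzz it before accepting it.
def llm_sort(xs):
    return sorted(xs)

def check_against_reference(candidate, reference, trials=100):
    # Compare the candidate against a trusted reference implementation
    # on randomly generated inputs.
    for _ in range(trials):
        case = [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
        if candidate(list(case)) != reference(list(case)):
            return False
    return True

print(check_against_reference(llm_sort, sorted))  # → True
```

This only works when a trusted oracle or test suite exists, which is the commenter’s point: without that rigorous check, you’re just trusting the guessing machine.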

  • deadbeef@lemmy.nz · 18 days ago

    AI models produced from copyrighted training data should need a license from the copyright holder to train using their data. This means most of the wild west land grab that is going on will not be legal. In general I’m not a huge fan of the current state of copyright at all, but that would put it on an even business footing with everything else.

    I’ve got no idea how to fix the screeds of slop that is polluting search of all kinds now. These sorts of problems ( along the lines of email spam ) seem to be absurdly hard to fix outside of walled gardens.

    • MudMan@fedia.io · 18 days ago

      See, I’m troubled by that one because it sounds good on paper, but in practice that means that Google and Meta, who can certainly build licenses into their EULAs trivially, would become the only government-sanctioned entities who can train AI. Established corpos were actively lobbying for similar measures early on.

      And of course good luck getting China to give a crap, which in that scenario would be a better outcome, maybe.

      Like you, I think copyright is broken past all functionality at this point. I would very much welcome an entire reconceptualization of it to support not just specific AI regulation but regulation of big data, fair use and user generated content. We need a completely different framework.

  • awesomesauce309@midwest.social · 18 days ago

    I’m not anti-AI, but I wish the people who are would describe what they’re upset about a bit more eloquently and decipherably. The environmental impact I completely agree with: making every Google search run a half-cooked beta LLM isn’t the best use of the world’s resources. But every time someone gets on their soapbox in the comments, it’s like they don’t even know the first thing about the math behind it. Just figure out what you’re mad about before you start an argument. It comes across as childish to me.

    • HakFoo@lemmy.sdf.org · 18 days ago

      It feels like we’re being delivered the sort of stuff we’d consider flim-flam if a human did it, but lapping it up because the machine did it.

      “Sure, boss, let me write this code (wrong) or outline this article (in a way that loses key meaning)!” If you hired a human who acted like that, we’d have them on an improvement plan in days and sacked in weeks.

      • awesomesauce309@midwest.social · 18 days ago

        So you dislike that the people selling LLMs are hyping up their product? They know they’re all dumb and hallucinate; their business model is enough people thinking it’s useful that someone pays them to host it. If the hype dies, Sam Altman is back in a closet office at Microsoft, so he hypes it up.

        I actually don’t use any LLMs, I haven’t found any smart ones. Text to image and image to image models are incredible though, and I understand how they work a lot more.

        • HakFoo@lemmy.sdf.org · 18 days ago

          I expect the hype people to do hype, but I’m frustrated that the consumers are also being hypemen. So much of this stuff, especially at the corporate level, is FOMO rather than actually delivered value.

          If it was any other expensive and likely vendor-lockin-inducing adventure, it would be behind years of careful study and down-to-the-dime estimates of cost and yield. But the same people who historically took 5 years to decide to replace an IBM Wheelwriter with a PC and a laser printer are rushing to throw AI at every problem up to and including the men’s toilet on the third floor being clogged.

  • Paradachshund@lemmy.today · 18 days ago

    If we’re going pie in the sky I would want to see any models built on work they didn’t obtain permission for to be shut down.

    Failing that, any models built on stolen work should be released to the public for free.

    • Riskable@programming.dev · 18 days ago

      If we’re going pie in the sky I would want to see any models built on work they didn’t obtain permission for to be shut down.

      I’m going to ask the tough question: Why?

      Search engines work because they can download and store everyone’s copyrighted works without permission. If you take away that ability, we’d all lose the ability to search the Internet.

      Copyright law lets you download whatever TF you want. It isn’t until you distribute said copyrighted material that you violate copyright law.

      Before generative AI, Google screwed around internally with all those copyrighted works in dozens of different ways. They never asked permission from any of those copyright holders.

      Why is that OK but doing the same with generative AI is not? I mean, really think about it! I’m not being ridiculous here, this is a serious distinction.

      If OpenAI did all the same downloading of copyrighted content as Google and screwed around with it internally to train AI then never released a service to the public would that be different?

      If I’m an artist that makes paintings and someone pays me to copy someone else’s copyrighted work, that’s on me to make sure I don’t do it. It’s not really the problem of the person that hired me unless they distribute the work.

      However, if I use a copier to copy a book then start selling or giving away those copies that’s my problem: I would’ve violated copyright law. However, is it Xerox’s problem? Did they do anything wrong by making a device that can copy books?

      If you believe that it’s not Xerox’s problem then you’re on the side of the AI companies. Because those companies that make LLMs available to the public aren’t actually distributing copyrighted works. They are, however, providing a tool that can do that (sort of). Just like a copier.

      If you paid someone to study a million books and write a novel in the style of some other author you have not violated any law. The same is true if you hire an artist to copy another artist’s style. So why is it illegal if an AI does it? Why is it wrong?

      My argument is that there’s absolutely nothing illegal about it. They’re clearly not distributing copyrighted works. Not intentionally, anyway. That’s on the user. If someone constructs a prompt with the intention of copying something as closely as possible… To me, that is no different than walking up to a copier with a book. You’re using a general-purpose tool specifically to do something that’s potentially illegal.

      So the real question is this: Do we treat generative AI like a copier or do we treat it like an artist?

      If you’re just angry that AI is taking people’s jobs say that! Don’t beat around the bush with nonsense arguments about using works without permission… Because that’s how search engines (and many other things) work. When it comes to using copyrighted works, not everything requires consent.

      • lakemalcom10@lemm.ee · 18 days ago

        Search engines work because they can download and store everyone’s copyrighted works without permission. If you take away that ability, we’d all lose the ability to search the Internet.

        No they don’t. They index the content of the page and score its relevance and reliability, and still provide the end user with the actual original information
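The indexing this comment describes can be sketched in a few lines. This toy inverted index (data and names invented) shows how a search result is a pointer back to the original page, not synthesized text:

```python
from collections import defaultdict

# A toy corpus standing in for crawled pages.
pages = {
    "page1": "carbon taxes make energy more expensive",
    "page2": "generative AI uses a lot of energy",
}

# Build an inverted index: term -> set of pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for term in text.split():
        index[term].add(url)

# A query returns references to the original documents, with attribution.
print(sorted(index["energy"]))  # → ['page1', 'page2']
```

Real engines add ranking, snippets, and caching (the cache is where the copyright comparison gets murkier), but the output is still attributed pointers to the source.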

      • lakemalcom10@lemm.ee · 18 days ago

        However, if I use a copier to copy a book then start selling or giving away those copies that’s my problem: I would’ve violated copyright law. However, is it Xerox’s problem? Did they do anything wrong by making a device that can copy books?

        This is false equivalence

        LLMs do not wholesale reproduce an original work in its original form; they make it easy to mass-produce a slightly altered form without any way to identify the original attribution.

      • Cethin@lemmy.zip · 18 days ago

        Like the other comments say, LLMs (the thing you’re calling AI) don’t think. They aren’t intelligent. If I steal other people’s work, copy pieces of it, and distribute it as if I made it, that’s wrong. That’s all LLMs are doing. They aren’t “being inspired” or anything like that; that requires thought. They are copying data and creating outputs based on weights that tell them how and where to put copied material.

        I think the largest issue is people hearing the term “AI” and taking it at face value. There’s no intelligence, only an algorithm. It’s a convoluted algorithm whose workings are hard to tell just by looking at it, but it is an algorithm. There are no thoughts, only weights that are trained on data to generate predictable outputs based on given inputs. If I write an algorithm that steals art and reorganizes it into unique pieces, that’s still stealing their art.

        For a current example, the stuff going on with Marathon is pretty universally agreed upon to be bad and wrong. However, you’re arguing if it was an LLM that copied the artist’s work into their product it would be fine. That doesn’t seem reasonable, does it?
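Whatever one’s view of the copying-vs-inspiration debate above, the “only weights” claim can be illustrated with a deliberately simple analogy. This toy bigram model (not how a real transformer works, just the smallest possible example of the idea) has no knowledge beyond a table of counted weights from its training text:

```python
from collections import Counter, defaultdict

def train(text: str):
    # The "model" is nothing but bigram counts: weights mapping each
    # word to how often each other word followed it in training data.
    weights = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        weights[a][b] += 1
    return weights

def predict(weights, word: str) -> str:
    # "Generation" is a weight lookup: emit the highest-count successor.
    return weights[word].most_common(1)[0][0]

w = train("the cat sat on the mat and the cat slept")
print(predict(w, "the"))  # → cat ("cat" followed "the" most often)
```

Everything the toy emits is a rearrangement of its training data weighted by frequency, which is the mechanical point being made, however one then judges the legal question.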

        • Riskable@programming.dev · 18 days ago

          My argument is that the LLM is just a tool. It’s up to the person that used that tool to check for copyright infringement. Not the maker of the tool.

          Big company LLMs were trained on hundreds of millions of books. They’re using an algorithm that’s built on that training. To say that their output is somehow a derivative of hundreds of millions of works is true! However, how do you decide the amount you have to pay each author for that output? Because they don’t have to pay for the input; only the distribution matters.

          My argument is that is far too diluted to matter. Far too many books were used to train it.

          If you train an AI with Stephen King’s works and nothing else then yeah: Maybe you have a copyright argument to make when you distribute the output of that LLM. But even then, probably not because it’s not going to be that identical. It’ll just be similar. You can’t copyright a style.

          Having said that, with the right prompt it would be easy to use that Stephen King LLM to violate his copyright. The point I’m making is that until someone actually does use such a prompt no copyright violation has occurred. Even then, until it is distributed publicly it really isn’t anything of consequence.

          • Cethin@lemmy.zip · 17 days ago

            I run local models. The other day I was writing some code and needed to implement simplex noise, and LLMs are great for writing all the boilerplate stuff. I asked it to do it, and it did alright although I had to modify it to make it actually work because it hallucinated some stuff. I decided to look it up online, and it was practically an exact copy of this, down to identical comments and everything.

            It is not too diluted to matter. You just don’t have the knowledge to recognize what it copies.

  • BertramDitore@lemm.ee · 18 days ago

    I want real, legally-binding regulation, that’s completely agnostic about the size of the company. OpenAI, for example, needs to be regulated with the same intensity as a much smaller company. And OpenAI should have no say in how they are regulated.

    I want transparent and regular reporting on energy consumption by any AI company, including where they get their energy and how much they pay for it.

    Before any model is released to the public, I want clear evidence that the LLM will tell me if it doesn’t know something, and will never hallucinate or make something up.

    Every step of any deductive process needs to be citable and traceable.

    • davidgro@lemmy.world · 18 days ago

      … I want clear evidence that the LLM … will never hallucinate or make something up.

      Nothing else you listed matters: That one reduces to “Ban all Generative AI”. Actually worse than that, it’s “Ban all machine learning models”.

      • mosiacmango@lemm.ee · 18 days ago

        If “they have to use good data and actually fact-check what they say to people” kills “all machine learning models,” then it’s a death they deserve.

        The fact is that you can do the above; it’s just much, much harder (you have to work with data from trusted sources), much slower (you have to actually validate that data), and way less profitable (your AI will be able to answer far fewer questions) than pretending to be the “answer to everything machine.”

        • Redex@lemmy.world · 18 days ago

          The way generative AI works means that no matter how good the data is, it’s still gonna bullshit and lie; it won’t “know” whether it knows something or not. It’s a chaotic process, and no ML algorithm has ever produced 100% correct results.

  • Sunsofold@lemmings.world · 18 days ago

    Magic wish granted? Everyone gains enough patience to leave it to research until it can be used safely and sensibly. It was fine when it was an abstract concept being researched by CS academics. It only became a problem when it all went public and got tangled in VC money.

    • venusaur@lemmy.world (OP) · 18 days ago

      Unfortunately, right now the world is providing the greatest level of research for AI.

      I feel like the only thing that the world universally bans is nuclear weapons. AI would have to become so dangerous that the world decides to leave it in the lab, but you can easily make an LLM at home. You can’t just make nuclear power in your room.

      How do you get your wish?

      • Sunsofold@lemmings.world · 18 days ago

        If I knew how to grant my wish, it’d be less of a wish and more of a quest. Sadly, I don’t think there’s a way to give patience to the world.

        • venusaur@lemmy.world (OP) · 16 days ago

          Yeah I don’t think our society is in a position mentally to have patience. We’ve trained our brains to demand a fast-paced variety of gratification at all costs.

          • Sunsofold@lemmings.world · 15 days ago

            We were already wired for it, but we didn’t have access to the things we have now. It takes a lot of wealth to ride the hedonic treadmill, but our societies have reached a baseline wealth where it has become much more achievable to ride it almost all the time.

  • HakFoo@lemmy.sdf.org · 18 days ago

    Stop selling it at a loss.

    When each ugly picture costs $1.75, and every needless summary or expansion costs 59 cents, nobody’s going to want it.

  • OTINOKTYAH@feddit.org · 18 days ago

    Not destroying but being real about it.

    It’s flawed as hell and feels like a hype to save big tech companies, while the end user gets a shitty product. But companies keep shoving it into apps and everything, even if it degrades the user experience (like Duolingo).

    Also, yes, there need to be laws for that. I mean, if I download something illegally, I will be put behind bars and can kiss my life goodbye. If a megacorp does that to train their LLM, “it’s for the greater good.” That’s bullshit.

  • sweemoof@lemmy.world · 18 days ago

    The most popular models used online need to include citations for everything. They can be used to automate some white-collar/knowledge work, but need to be scrutinized heavily by independent thinkers when used to try to predict trends and future events.

    As always, schools need to be better at teaching critical thinking, epistemology, and emotional intelligence way earlier than we currently do, and AI shows that rote subject matter is a dated way to learn.

    When artists create art, there should be some standardized seal, signature, or verification that the artist did not use AI or used it only supplementally on the side. This would work on the honor system and just constitute a scandal if the artist is eventually outed as having faked their craft. (Think finding out the handmade furniture you bought was actually made in a Vietnamese factory. The seller should merely have their reputation tarnished.)

    Overall I see AI as the next step in search engine synthesis, info just needs to be properly credited to the original researchers and verified against other sources by the user. No different than Google or Wikipedia.

  • BackgrndNoize@lemmy.world · 17 days ago

    Make it unprofitable for the companies peddling it: pass laws that curtail its use, sue them for copyright infringement, socially shame and shit on AI-generated anything on social media and in person, and vote with your money to avoid anything related to it.

  • Goldholz@lemmy.blahaj.zone · 18 days ago

    Shutting these “AI”s down. The ones out for the public don’t help anyone. They do more damage than they’re worth.