ITT: People who didn’t check the community name
To be fair, I thought I blocked this community…
The more I see dishonest, blindly reactionary rhetoric from anti-AI people - especially when that rhetoric is identical to classic RIAA brainrot - the more I warm up to (some) AI.
Oh boy here we go downvotes again
regardless of the model you’re using, the tech itself was developed and fine-tuned on stolen artwork with the sole purpose of replacing the artists who made it
That’s not how that works. You can train a model on licensed or open data, and the people developing it didn’t make it to spite you; even if a large group of grifters are using it that way, those aren’t the ones developing it.
If you’re going to hate something, at least base it on reality and try to avoid being so black-and-white about it.
I think his argument is that the models initially needed lots of data to verify and validate their current operation. Subsequent advances may have allowed those models to be created cleanly, but those advances relied on tainted data, thus making the advances themselves tainted.
I’m not sure I agree with that argument. It’s like saying that if you invented a cure for cancer that relied on morally bankrupt means, you shouldn’t use that cure. I’d say there should be a legal process against the person who committed the illegal acts, but once you have discovered something, it stands on its own two feet. Perhaps, however, there should be some kind of reparations given to the people who were abused in that process.
The existing models were all trained on stolen art
I don’t know who this guy is, but I’m with him on this one, at least.
Rejecting the inevitable is dumb. You don’t have to like it but don’t let that hold you back on ethical grounds. Acknowledge, inform, prepare.
You probably create AI slop and present it proudly to people.
AI should replace dumb monotonous shit, not creative arts.
I couldn’t care less about AI art. I use AI in my work every day in dev. The coworkers who are not embracing it are falling behind.
Edit: I keep my AI use and discoveries private, nobody needs to know how long (or little) it took me.
I use GPT to prototype out some Ansible code. I feel AI slop is just fine for that; and I can keep my brain freer of YAML and Ansible, which saves me from alcoholism and therapy later.
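For the curious, the kind of thing I mean is plumbing like this, which GPT can rough out in seconds (a minimal sketch; the service name and paths here are placeholders, not anything I actually run):

```yaml
# Hypothetical example of the boring scaffolding I mean: install a package,
# drop a config file from a template, and restart the service when it changes.
- name: Configure an example service
  hosts: webservers
  become: true
  tasks:
    - name: Install the package
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Deploy the config from a template
      ansible.builtin.template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Restart the service

  handlers:
    - name: Restart the service
      ansible.builtin.service:
        name: nginx
        state: restarted
```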
Tools have always been used to replace humans. Is anyone using a calculator a shitty person? What about storing my milk in the fridge instead of getting it from the milkman?
I don’t have an issue with the argument, but unless they’re claiming that any tool which replaced human jobs was unethical, their argument is not self-consistent and thus lacks any merit.
Edit: notice how no one has tried to argue against this
Would you replace a loved one (a child, spouse, parent, etc.) with an artificial “tool”? Would it matter to you if they’re not real, even when you couldn’t tell the difference? And if your answer is yes, you’d have no trouble replacing a loved one with an artificial copy, then our views/morals are fundamentally so different that I can’t see us ever agreeing.
It’s like trying to convince me that having sex with animals is awesome and great and they like it too, and I’m just: no thanks, that’s gross and wrong, please never talk to me again. I know I don’t necessarily have the strongest logic in the AI (and especially “AI art”) discussion, but that’s how I feel.
The people who made calculators didn’t steal anything from mathematicians in order to make them work.
“If you facilitate AI art, you are a shitty person”
There are ethical means to build models using consensually gathered data. He says those artists are shitty.
Didn’t they? Did they get consent from the mathematicians to use their work?
Math is discovery, not creation.
What a completely arbitrary distinction
Hahah love it
Yes, I also think the kitchen knife and the atom bomb are flatly equivalent. Consistency, people!
Such a strong point, can’t believe I didn’t think of that.
I can’t believe I never thought about calculators. You and I really are the brothers dunce, aren’t we?
I like how he made an edit to say no one is arguing against his point, the only response he got does argue against his point, and then he replies to that with no argument.
He made a completely irrelevant observation. There was no argument. He didn’t try to refute anything I said; he tried to belittle the argument. No response was necessary. If anyone else has responded, I haven’t had a chance to look.
They virtually always do this. People are, very often, not actually motivated by logic and reason; logic and reason are a costume they don to appear more authoritative.
I read it as sarcasm. 🤷
And this is where I split with Lemmy.
There’s a very fragile, fleeting war between shitty, tech-bro-hyped (but bankrolled) corporate AI and locally runnable, openly licensed, practical tool models without nearly as much funding. Guess which one doesn’t care about breaking the law because everything is proprietary?
The “I don’t care how ethical you claim to be, fuck off” attitude is going to get us stuck with the former. It’s the same argument as Lemmy vs Reddit, compared to a “fuck anything like Reddit, just stop using it” attitude.
What if it was just some modder trying a niche model/finetune to restore an old game, for free?
That’s a rhetorical question, as I’ve been there: a few years ago, I used ESRGAN finetunes to help restore a game and (separately) a TV series. Used some open databases for data. Community loved it. I suggested an update in that same community (who apparently had no idea their beloved “remaster” involved old-school “AI”), and got banned for the mere suggestion.
So yeah, I understand AI hate, oh do I. Keep shitting on Altman and the AI bros. But anyone (like this guy) who wants to bury open-weights AI: you are digging your own graves.
Oh, so you deserve to use other people’s data for free, but Musk doesn’t? Fuck off with that one, buddy.
Using open datasets means using data people have made available publicly, for free, for any purpose. So using an AI based on that seems considerably more ethical.
Except gen AI didn’t exist when those people decided on their license. And besides which, it’s very difficult to specify “free to use, except in ways that undermine free access” in a license.
The responsibility is on the copyright holder to use a license they actually understand.
If you license your work with, say, the Zero-Clause BSD license (0BSD), you are very explicitly giving away your right to dictate how other people use your work. Don’t be angry if people do so in ways you don’t like.
To be fair, he did say he “used some open databases for data”
Musk does too, if it’s openly licensed.
Big difference is:

- X’s data crawlers don’t give a shit because all their work is closed source. And they have lawyers to just smash anyone that complains.
- X intends to resell and make money off others’ work. My intent is free, transformative work I don’t make a penny off of, which is legally protected.

That’s another thing that worries me. All this is heading in a direction that will outlaw stuff like fanfics, game mods, fan art, anything “transformative” of an original work and used noncommercially, as pretty much any digital tool can be classified as “AI” in court.
Just make a user interface for your game bro. No need to bring AI into it
What if it was just some modder trying a niche model/finetune to restore an old game, for free?
Yeah? Well what if they got very similar results with traditional image processing filters? Still unethical?
The effect isn’t the important part.
If I smash a thousand orphan skulls against a house and wet it, it’ll have the same effect as a decent limewash. But people might have a problem with the sourcing of the orphan skulls.
It doesn’t matter if you’we just a wittle guwy that collects the dust from the big corporate orphan skull crusher and just add a few skulls of your own, or you are the big corporate skull crusher. Both are bad people despite producing the same result as a painter that sources normal limewash made out of limestone.
Even if all involved data is explicitly public domain?
What if it’s not public data at all? Like artificial collections of pixels used to train some early upscaling models?
That’s what I was getting at: some upscaling models are really old, used in standard production tools under the hood, and completely legally licensed. Where do you draw the line between ‘bad’ and ‘good’ AI?
Also I don’t get the analogy. I’m contributing nothing to big, enshittified models by doing hobbyist work, if anything it poisons them by making public data “inbred” if they want to crawl whatever gets posted.
Dude. Read the room. You’re literally in a community called “Fuck AI” and arguing for AI. Are you masochistic or just plain dumb?
Even if the data is “ethically sourced,” the energy consumption is still fucked.
The energy consumption of a single AI exchange is roughly on par with a single Google search back in 2009. Source. Was using Google search in 2009 unethical?
Total nonsense. ESRGAN was trained on potatoes; tons of research models are. I finetune models on my desktop for nickels of electricity; it never touches a cloud datacenter.
At the high end, if you look past bullshitters like Altman, models are dirt cheap to run and getting cheaper. If BitNet takes off (and a 2B model was just released days ago), inference energy consumption will be basically free and on-device, like video encoding/decoding is now.
Again, I emphasize, it’s corporate bullshit giving everything a bad name.