some liberals overcorrected against “ai” even though there are plenty of legitimate reasons not to want it around, and especially not to have corpos owning the output.
I’m convinced the hyperfocus on generative ai models somehow iNfRiNgInG upon holy copyright protections was entirely a corporate psyop to begin with, because at the end of the day that line of argument further enshrines the power of corporate property and gives an easy pivot to whitewashing proprietary corporate “not InFrInGiNg” models.
The closest to ethical that AI gets are open source models that can be run locally, and they’re coincidentally the most “infringing” models, while the least ethical ones are the secretive proprietary corporate models being trained on data that’s laundered by corporations unilaterally claiming the right to license it for that purpose.
Like what are the biggest problems? Endless mountains of low-grade slop, mostly coming out of corporate-hosted models; companies trying to replace workers with dogshit chatbots, which are 100% proprietary corporate services; media companies threatening to eliminate actors using internal proprietary models they claim they have the property rights to train; etc. Not one of these problems comes from copyright failing to be expanded to cover licensing and restricting how someone looks at a copyrighted thing, and almost every problem comes from huge corporate property holders, with most of the rest coming from petty bourgeois grifters.
We never should have stopped
What if I call it the people’s democratic intellectual property, is it cool now?
one time i read about some libertarian scheme to fund some kind of UBI with “royalties” for ancient inventions like the wheel and firemaking.
Wait, who stopped? Show yourselves!
Lol I’ll let you argue with them.