Just came across this post today and thought it might be a topical issue to discuss from an anarchist perspective.
cross-posted from: https://lemmy.ml/post/29190434
AI has become a deeply polarizing issue on the left, with many people having concerns regarding its reliance on unauthorized training data, displacement of workers, lack of creativity, and environmental costs. I’m going to argue that while these critiques warrant attention, they overlook the broader systemic context. As Marxists, our focus should not be on rejecting technological advancement but on challenging the capitalist framework that shapes its use. By reframing the debate, we can recognize AI’s potential as a tool for democratizing creativity and accelerating the contradictions inherent in capitalism.
Marxists have never opposed technological progress in principle. From the Industrial Revolution to the digital age, we have understood that technological shifts necessarily proletarianize labor by reshaping modes of production. AI is no exception. What distinguishes it is its capacity to automate aspects of cognitive and creative tasks such as writing, coding, and illustration that were once considered uniquely human. This disruption is neither unprecedented nor inherently negative. Automation under capitalism displaces workers, yes, but our critique must target the system that weaponizes progress against the workers as opposed to the tools themselves. Resisting AI on these grounds mistakes symptoms such as job loss for the root problem of capitalist exploitation.
Democratization Versus Corporate Capture
The ethical objection to AI training on copyrighted material holds superficial validity, but only within capitalism’s warped logic. Intellectual property laws exist to concentrate ownership and profit in the hands of corporations, not to protect individual artists. Disney’s ruthless copyright enforcement, for instance, sharply contrasts with its own history of mining public-domain stories. Meanwhile, OpenAI’s scraping of data at scale exposes the hypocrisy of a system that privileges corporate IP hoarding over collective cultural wealth. Large corporations can ignore copyright without being held to account while regular people cannot. In practice, copyright helps capitalists far more than it helps individual artists. Attacking AI for “theft” inadvertently legitimizes the very IP regimes that alienate artists from their work. Should a proletarian writer begrudge the use of their words to build a tool that, in better hands, could empower millions? The true conflict lies not in AI’s training methods but in who controls its outputs.
Open-source AI models, when decoupled from profit motives, democratize creativity in unprecedented ways. They enable a nurse to visualize a protest poster, a factory worker to draft a union newsletter, or a tenant to simulate rent-strike scenarios. This is no different from fanfiction writers reimagining Star Wars or street artists riffing on Warhol. It’s just collective culture remixing itself, as it always has. The threat arises when corporations monopolize these tools to replace paid labor with automated profit engines. But the paradox here is that boycotting AI in grassroots spaces does nothing to hinder corporate adoption. It only surrenders a potent tool to the enemy. Why deny ourselves the capacity to create, organize, and imagine more freely, while Amazon and Meta invest billions to weaponize that same capacity against us?
Opposing AI for its misuse under capitalism is both futile and counterproductive. Creativity critiques confuse corporate mass-production with the experimental joy of an individual sketching ideas via tools like Stable Diffusion. Our task is not to police personal use but to fight for collective ownership. We should demand public AI infrastructure to ensure that this technology is not hoarded by a handful of corporations. Surrendering it to capital ensures defeat while reclaiming it might just expand our arsenal for the fights ahead.
Creativity as Human Intent, Not Tool Output
The claim that AI “lacks creativity” misunderstands both technology and the nature of art itself. Creativity is not an inherent quality of tools — it is the product of human intention. A camera cannot compose a photograph; it is the photographer who chooses the angle, the light, the moment. Similarly, generative AI does not conjure ideas from the void. It is an instrument wielded by humans to translate their vision into reality. Debating whether AI is “creative” is as meaningless as debating whether a paintbrush dreams of landscapes. The tool is inert; the artist is alive.
AI has no more volition than a camera. When I photograph a bird in a park, the artistry does not lie in the shutter button I press or the aperture I adjust, but in the years I’ve spent honing my eye to recognize the interplay of light and shadow, anticipating the tilt of a wing, sensing the split-second harmony of motion and stillness. These are the skills that allow me to capture a striking image.
Hand my camera to a novice, and it is unlikely they would produce anything interesting with it. Generative AI operates the same way. Anyone can type “epic space battle” into a prompt, but without an understanding of color theory, narrative tension, or cultural symbolism, the result is generic noise. This is what we refer to as AI slop. The true labor resides in the human ability to curate and refine, to transform raw output into something resonant.
People who attack gen AI on the grounds of it being “soulless” are recycling a tired pattern of gatekeeping. In the 1950s, programmers derided high-level languages like FORTRAN as “cheating,” insisting real coders wrote in assembly. They conflated suffering with sanctity, as if the drudgery of manual memory allocation were the essence of creativity. Today’s artists, threatened by AI, make the same error. Mastery of Photoshop brushes or oil paints is not what defines art; it’s a technical skill developed for a particular medium. What really matters is the capacity to communicate ideas and emotions through a medium. Tools evolve, and human expression adapts in response. When photography first emerged, painters declared mechanical reproduction the death of art. Instead, it birthed new forms such as surrealism, abstraction, and cinema that expanded what art could be.
The real distinction between a camera and generative AI is one of scope, not substance. A camera captures the world as it exists while AI visualizes worlds that could be. Yet both require a human to decide what matters. When I shot my bird photograph, the camera did not choose the park, the species, or the composition. Likewise, AI doesn’t decide whether a cyberpunk cityscape should feel dystopian or whimsical. That intent, the infusion of meaning, is irreplaceably human. Automation doesn’t erase creativity; it redistributes labor. Just as calculators freed mathematicians from the drudgery of arithmetic, AI lowers technical barriers for artists, shifting the focus to concept and critique.
The real anxiety over AI art is about the balance of power. When institutions equate skill with specific tools such as oil paint, Python, or DSLR cameras, they privilege those with the time and resources to master them. Generative AI, for all its flaws, democratizes access. A factory worker can now illustrate their memoir, and a teenager in Lagos can prototype a comic. Does this mean every output is “art”? No more than every Instagram snapshot is a Cartier-Bresson. But gatekeepers have always weaponized “authenticity” to exclude newcomers. The camera did not kill art. Assembly lines did not kill craftsmanship. And AI will not kill creativity. What it exposes is that much of what we associate with the production of art is rooted in specific technical skills.
Finally, the “efficiency” objection to AI collapses under its own short-termism. Consider that just a couple of years ago, running a state-of-the-art model required a data center full of GPUs burning through kilowatts of power. Today, a DeepSeek model runs on a consumer-grade desktop using a mere 200 watts of power. This trajectory is predictable. Hardware optimizations, quantization, and open-source breakthroughs have slashed computational demands exponentially.
Critics cherry-pick peak resource use during AI’s infancy. Meanwhile, AI’s energy footprint per output unit plummets year-over-year. Training GPT-3 in 2020 consumed ~1,300 MWh; by 2023, similar models achieved comparable performance with 90% less power. This progress is the natural arc of technological maturation. There is every reason to expect that these trends will continue into the future.
Open Source or Oligarchy
To oppose AI as a technology is to miss the forest for the trees. The most important question is who will control these tools going forward. No amount of ethical hand-wringing will halt development of this technology. Corporations will chase AI for the same reason 19th-century factory owners relentlessly chased steam engines. Automation allows companies to cut costs, break labor leverage, and centralize power. Left to corporations, AI will become another privatized weapon to crush worker autonomy. However, if it is developed in the open, it has the potential to become a democratized tool that expands collective creativity.
We’ve seen this story before. The internet began with promises of decentralization, only to be co-opted by monopolies like Google and Meta, who transformed open protocols into walled gardens of surveillance. AI now stands at the same crossroads. If those with ethical concerns about AI abandon the technology, its development will inevitably be left solely to those without such scruples. The result will be proprietary models locked behind corporate APIs that are censored to appease shareholders, priced beyond public reach, and designed solely for profit. It’s a future where Disney holds exclusive rights to generate “fairytale” imagery, and Amazon patents “dynamic storytelling” tools for its Prime franchises. This is the necessary outcome when technology remains under corporate control. Under capitalism, innovation always serves monopoly power as opposed to the interests of the public.
On the other hand, open-source AI offers a different path forward. Stable Diffusion’s leak in 2022 proved this: within months, artists, researchers, and collectives weaponized it for everything from union propaganda to indigenous language preservation. The technology itself is neutral, but its application becomes a tool of class warfare. The fight should be for public AI infrastructure, transparent models, community-driven training data, and worker-controlled governance. It’s a fight for the means of cultural production. Not because we naively believe in “neutral tech,” but because we know the alternative is feudalistic control.
The backlash against AI art often fixates on nostalgia for pre-digital craftsmanship. But romanticizing the struggle of “the starving artist” only plays into capitalist myths. Under feudalism, scribes lamented the printing press; under industrialization, weavers smashed looms. Today’s artists face the same crossroads: adapt or be crushed. Adaptation doesn’t mean surrender, it means figuring out ways to organize effectively. One example of this model in action was when Hollywood writers used collective bargaining to demand AI guardrails in their 2023 contracts.
Artists hold leverage that they can wield if they organize strategically along material lines. What if illustrators unionized to mandate human oversight in AI-assisted comics? What if musicians demanded royalties each time their style trains a model? It’s the same solidarity that forced studios to credit VFX artists after decades of erasure.
Moralizing about AI’s “soullessness” is a dead end. Capitalists don’t care about souls; they care about surplus value. Every worker co-op training its own model, every indie game studio bypassing proprietary tools, every worker using open AI tools to have their voice heard chips away at corporate control. It’s the materialist task of redistributing power. Marx didn’t weep for the cottage industries steam engines destroyed. He advocated for the socialization of the means of production. The goal of stopping AI is not a realistic one, but we can ensure its dividends flow to the many, not the few.
The oligarchs aren’t debating AI ethics, they’re investing billions to own and control this technology. Our choice is to cower in nostalgia or fight to have a stake in our future. Every open-source model trained, every worker collective formed, every contract renegotiated is a step forward. AI won’t be stopped any more than the printing press and the internet before it. The machines aren’t the enemy. The owners are.
I don’t think this is a Marxist perspective. It is founded on an assumption that LLMs are capable of positively transforming creative production in one way or another, be it by raising the productivity of existing participants or lowering barriers of entry for new ones. This assumption is false, in my opinion, because there is a difference between abundance and dilution. When the market is flooded with lemons of questionable quality, all that is achieved is intensified information asymmetry, meaning that it becomes harder, more laborious, more expensive to distinguish good lemons from bad ones. This leads to a less productive market overall because sellers have to invest extra labour and money to even have a chance of finding buyers, while buyers invest the extra to buy or produce information that would help discover the good lemons. A good existing example of this situation is today’s job market.
And if one is to argue that LLM produced slop is the good lemon – or going to be one very soon just you wait Sam Altman / China is developing it so fast! – then they arrive at, in my opinion, the crux of the negative reaction towards the LLM hype. It is not any of the reasons listed in the OP, it is just that people think it is not very useful or good.
Too bad this was all written using an LLM though. I cannot engage further when I know the OP put this little effort in. :(
Was it? I was under the impression @yogthos wrote it.
Nope.
He prompted a bourgeois tool to plead with us not to look at the bourgeois plot.
The thing is, the way these models are created is inherently conservative. When used they privilege the past, the what-has-come-before.
This is bad. Not that I can’t see hypothetical use cases this technology is genuinely good for.
But the hype has completely swamped any real utility they could be put to. The marketing efforts of companies like every-single-corporation-ever have created in many a misunderstanding that borders on religion, and utterly obliterated any genuine public utility for this technology. It’s like modifying the parameters of reality so that every time a scalpel touches solid matter, it explodes in a colossal mushroom cloud, obliterating the entire city.
This technology is also being used as a kind of abdication machine: a way to build an unaccountable black box to excuse the poor behavior of the powerful. Zionist militants use it to say their bombs target militants, judges use it to call their racist bullshit impartial, cops use it to get warrants without evidence.
The ELIZA effect convinces people that a couple of lines of language processing is a person, and convinces many that this shit is a god. Literally. It exposes and exploits mental illness like a network attack script does outdated firewalls. It operates in and upon all the blind spots our sick society requires we maintain, lest we rebel or go mad.
It is an incubator of mental illness, a dehumanizing force, and a defender of all abuse. To say nothing of its effects on our collective concept of ‘truth’, which I can’t even start on right now, but suffice to say that it’s become thoroughly fucked.
The ways it can be, and always seems to be, applied simultaneously to enshittification and the disciplining of labor are reducing standards of living. It functions very well as a machine for making people accept less.
Maybe a better society of better people could use this as a neat niche tool, a fun toy, and an aid to make some knowledge work go faster.
But every part of who we are and how we are organized makes this shit toxic for us. Maybe even more toxic than the fossil fuels we’re burning and water tables we’re depleting to run these fucking things.
Open-source AI models
Do not exist. The whole post is flawed if you believe this.
Came here to say this exactly.
The very little L.O.S.S P.I.S.S. there is, is not being used to praxis anarchism in our lives. Boo to folks not liberating our lives with emancipatory tools.👎
The source code isn’t public?
If you don’t have the training data, it’s useless. That’s the new open source, where binaries are “open” and people don’t mind.
Check out the AI horde which is a bit like folding@home but for AI.
I looked at the horde. You can’t train your own AI since you don’t have all the original material, and it is censored like every other project. Neither open nor anarchist.
Yes, in a perfect world, the use of LLMs and the like (there is no AI) would be a good way to extend human capabilities. We don’t live in that world. In the world we live in, LLMs are a net negative for workers, the planet, and even small businesses, period.
LLMs should only be used in scientific scenarios. Profiting off them should be outlawed, and energy usage should generally be priced exponentially, not only for LLMs.
As someone who works in the field: the whole topic is hype only for those who don’t understand LLMs and those who understand only LLMs and not reality. We have neither the need for large-scale LLM rollout, nor do we get a big benefit from it. Stop believing such nonsense please.
So, get us a stateless, classless society and then you can have LLMs, not before.
Actually useful uses for LLMs:
- Text checking - being language models, LLMs can look over written text and provide fixes that traditional tools might have a problem with.
- … I guess answering very general or language-based questions, the kind that doesn’t require specifics or is specifically about language and understanding meaning (e.g. summarization, definitions, etc.).
- … … Entertainment? Although an actual human being would almost certainly do a better job.
- Can’t think of any more. I don’t even trust it enough to structure speech into a machine readable output.
In reality I think it actually just needs to be known that LLMs cannot reason and are just guessing, and that relying on their output is stupid. At that point it becomes really hard to profit off of them, as all of the above can be achieved with small models running locally, which you can’t really monetise.
It’s a bubble. It will pop, and everyone laid off because of it will be rehired.
They’re just not really useful. None of this was a huge problem before. LLMs make short work of some scenarios but mostly they take longer to correct than the skilled person would have taken to write from scratch.
LLMs have no redeeming qualities outside of science, none, zero. Binary null.
I run my copy of DeepSeek on an RTX 3060 12 GB that cost $300 and uses less than $5 a month of electricity. That’s assuming it ran full tilt 24/7 (it does not). It just helps me write hobby code and read manuals for my tech stack. It has helped my process so much as a systems admin.
It really is a useful tool and a very pleasing one when you’re not giving OpenAI free access to your credit card.
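For anyone wanting to sanity-check electricity claims like this, the arithmetic is simple. Here is a minimal back-of-envelope sketch in Python, where the wattage and the price per kWh are illustrative assumptions, not figures from this thread:

```python
# Back-of-envelope monthly electricity cost for a GPU running continuously.
# Wattage and price per kWh are illustrative assumptions; plug in your own.
def monthly_gpu_cost(watts: float, usd_per_kwh: float, hours: float = 24 * 30) -> float:
    """Cost in USD of drawing `watts` continuously for `hours` (default: one month)."""
    kilowatt_hours = watts * hours / 1000  # convert watt-hours to kWh
    return kilowatt_hours * usd_per_kwh

# Example: a hypothetical 200 W continuous draw at an assumed $0.10/kWh
print(f"${monthly_gpu_cost(200, 0.10):.2f}")  # → $14.40
```

Real bills come in far lower than the continuous-draw figure because the card sits idle most of the time, which is why duty cycle matters more than peak wattage.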
I think people are right to downplay it a bit. I was watching Alex Avila’s “AI Wars” earlier and he was sort of defending the hype about AI’s capabilities, which I agree with, but I don’t think the average person understands the nuances of an LLM’s abilities. It isn’t thinking at the end of the day. It is trying to satisfy the human audience enough to receive math rewards. It’s a dog doing a really elaborate trick in hopes of a treat. But to a layperson it is fully thinking and feeling.
This is not a dangerous stance per se unless we start hooking ChatGPT up to the “please decide who dies” machine. It just wants to please its masters, and nothing would make UnitedHealth happier than never having to pay out insurance. Nothing would make the DoD happier than justifying the vaporization of foreign brown people based on an AI analysis.
You’re right, it’s just a tool. And a hammer can build a house or crush a skull. But I think we should be okay with a bit of luddism if it stops people from dangling over the nuke button and saying the hammer falling gets to kill us all.
I don’t care. Turn it all off. What is the point? The AI we have now is shit and will not improve our lives even under socialism. Fuck it. To death.
This is my stance and it is about as useful as yours for us, because capitalism will use this technology to make all our lives worse, while ignoring any high-minded ideals that may be technically possible. I literally do not care though, let’s get rid of capital first and then worry about this shit. Pointless really, to debate it now.
Spicy autocomplete can do some awesome things, but comparing it to the invention of the printing press seems at best premature. Strong AI of the kind we always imagined is still n years away as it has been since the 1960s.
One danger of the current stuff is that people will anthropomorphize it, overestimate its abilities, and misapply it. The problem is not that it lacks creativity. Random rolls of the dice can also be useful as a creative tool. The problem that bothers me more is that the machines lack all taste, morality, and understanding while giving many people the false impression that they do have these things. From the Google users who mistakenly believe every AI-generated summary of search results to the venture capital firm that wastes a billion dollars on the premise that the machine is now infallible, it seems to have a tendency to lead us individually and collectively into absurd fantasy worlds as we project onto it our wildest dreams about meeting a superhuman intelligence.
Maybe progress will be rapid and it will all be different ten years from now, maybe not. What we have now is a small collection of new and potentially useful tools which seem capable of providing some more surprises here and there, good ones and bad, as we adapt to their existence — but not a miracle that will transform everything. It makes sense to criticise the more shallow of the arguments against it all and perhaps it can inform criticism of capitalism in new ways, but be careful not to buy into the hype too much.