I can’t blame you because the media is complicit, and everybody loves a story of good guy versus bad guy, but this is the reality:
No, I acknowledge that the world is a helluva lot more nuanced than "AI bad, military bad, absolute stances good". Absolutism is what we accuse the smooth-brained right-wing asshats of doing, so we certainly shouldn't be caught doing the same thing.
Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy.
You mean drones? You're talking about drones. What's wrong with drones?
We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
They had a contract with the Pentagon. They literally deal with military operations on a regular basis.
Hell, most of the pivotal technology developed in the last thousand years started as a military invention before civilian use. Including this internet thing you're arguing on right now.

Whoever wrote this article didn't even bother to do the most basic of research.
DeepSeek fully admitted they started with ChatGPT outputs to train their model, and then they released it as an open-source model so that everybody else can "steal" their work in turn. On the image/video front, the general public has created every possible variation on top of every model you can think of, and any model ever released with full weights has been spun into whatever variant or VRAM size you want.
The ugly truth the American companies want to hide is that they're spending trillions of dollars on an oligopoly they can't keep long-term. They hope they can just keep spending more money to add more billions of parameters to their models and stay technologically competitive with the secondary open-source models. But they already ran into diminishing returns over a year ago, and the global compute sector physically cannot keep up with demand for another cycle of even more diminishing returns.
The other factor is that realistic miniaturization of models is already here. The smaller sizes aren't as capable as the 250GB models running on cloud-based services, but you can still do a lot on a 16GB or 24GB video card with models sized to fit it. Optimization and LLM quantization are getting better every year. The AI bubble burst is going to force a cascade shift into a new era of localization. Everybody is sick to fucking death of renting and subscribing to everything. We pirates already handle that on the media front, and localization of LLMs is going to become way more popular soon.
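The memory math behind that point is straightforward. The numpy sketch below is illustrative only (the 70B parameter count is a hypothetical model size, not any specific product, and the int8 scheme is a bare-bones symmetric quantizer, not any vendor's actual implementation): it shows why dropping from fp16 to 4-bit turns a datacenter-sized model into something a consumer GPU can hold, and how little precision a simple weight round-trip actually loses.

```python
import numpy as np

# Memory footprint of a hypothetical 70B-parameter model at common precisions.
params = 70e9
for name, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{name}: {params * bytes_per_param / 1e9:.0f} GB")
# fp16: 140 GB, int8: 70 GB, int4: 35 GB -- only the last fits near a 24GB card
# (in practice, split across cards or offloaded in part to system RAM).

# Minimal sketch of symmetric int8 quantization: store weights as int8 plus a
# single float scale, and dequantize back to float at inference time.
weights = np.random.randn(1024).astype(np.float32)
scale = np.abs(weights).max() / 127.0            # map the largest weight to 127
quantized = np.round(weights / scale).astype(np.int8)
restored = quantized.astype(np.float32) * scale  # dequantize
print("max round-trip error:", np.abs(weights - restored).max())
```

The worst-case error of this scheme is half the scale step, which is why quantized models stay usable: each weight moves by at most a tiny fraction of the largest weight's magnitude. Real quantizers (per-channel scales, 4-bit group quantization, etc.) refine this same idea.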
The question isn't "Can people steal the tech?" It's "How long until people notice that it's already happening?"