cross-posted from: https://scribe.disroot.org/post/3196178
Italy’s antitrust watchdog AGCM said on Monday it had opened an investigation into Chinese artificial intelligence startup DeepSeek for allegedly failing to warn users that it may produce false information.
…
The Italian regulator, which also polices consumer rights, said in a statement DeepSeek did not give users “sufficiently clear, immediate and intelligible” warnings about the risk of so-called “hallucinations” in its AI-produced content.
It described these as “situations in which, in response to a given input entered by a user, the AI model generates one or more outputs containing inaccurate, misleading or invented information.”
In February, another Italian watchdog, the data protection authority, ordered DeepSeek to block access to its chatbot after the company failed to address the authority’s concerns about its privacy policy.
…
As an addition:
What They’re Not Telling You: China’s New DeepSeek Censors Even More Than Old Models
- China’s DeepSeek releases advanced AI model R1-0528, rivaling Western systems but heavily censoring political criticism and human rights issues.
- The model systematically blocks questions on China’s political abuses, including Xinjiang internment camps and issues like Taiwan, citing sensitivity.
- Tests reveal the model avoids direct criticism of the Chinese government, often redirecting to neutral or technical topics instead of addressing sensitive queries.
- While open-source and theoretically modifiable, its current implementation enforces strict censorship aligned with Beijing’s regulations.
- Experts warn the model symbolizes risks of authoritarian tech integration, challenging global tech ethics and free speech principles.
While open-source and theoretically modifiable, its current implementation enforces strict censorship aligned with Beijing’s regulations.
What’s theoretically modifiable about it?
https://huggingface.co/Goekdeniz-Guelmez/Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1
The JOSIEFIED model family represents a series of highly advanced language models built upon renowned architectures such as Alibaba’s Qwen2/2.5/3, Google’s Gemma3, and Meta’s LLaMA 3/4. Covering sizes from 0.5B to 32B parameters, these models have been significantly modified (“abliterated”) and further fine-tuned to maximize uncensored behavior without compromising tool usage or instruction-following abilities.
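For reference, a minimal sketch of what “theoretically modifiable” looks like in practice: loading the abliterated variant linked above with the Hugging Face `transformers` library. The model ID comes from the link; the loading options (`device_map`, dtype) are assumptions about the reader’s hardware, and the first call downloads the full weights.

```python
# Sketch: running the abliterated DeepSeek distill locally via transformers.
# Assumes `pip install transformers torch` and enough RAM/VRAM for an 8B model.

MODEL_ID = "Goekdeniz-Guelmez/Josiefied-DeepSeek-R1-0528-Qwen3-8B-abliterated-v1"

def load_local(model_id: str = MODEL_ID):
    """Load tokenizer and model for local inference.

    The heavy import is kept inside the function so merely defining this
    helper does not require transformers to be installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # assumption: let accelerate pick CPU/GPU placement
        torch_dtype="auto",  # assumption: use the checkpoint's native precision
    )
    return tok, model

# Usage (downloads several GB of weights on first run):
# tok, model = load_local()
```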
The Hugging Face CEO has warned about Chinese models like DeepSeek several times, and he is just one among many. There is strong evidence of censorship and privacy issues with DeepSeek.
But there is still a strong pro-China bias here on Lemmy.
Hugging Face CEO
“If you create a chatbot and ask it a question about Tiananmen, well, it’s not going to respond to you the same way as if it was a system developed in France or the U.S.,” Delangue warned.
Yes, on the cloud version. Do you know about running a local AI instance?
Yes, I know.
Yeah that’s the article I quoted from?
And, No, DeepSeek isn’t uncensored if you run it locally.
You sure about that?
[screenshots: local model answering the questions without censoring]
vs the cloud version:
[screenshot: cloud version’s response]
Didn’t even need to load it up in LM Studio:
Current Model deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
deleted by creator
Ok buddy, if you can’t see the literal screenshots that I took of DeepSeek not censoring its answers, I don’t know what to tell you.
Tried it with ollama and it seemed to give a fairly neutral response about the concentration camps without seeming to hide anything. Even gave me a vague recipe for napalm with sensible warnings. And I didn’t need my own coal plant to run it. Just a laptop.
Have the “experts” been using a Chinese cloud instead, or is it just the news writers? Looks like another propaganda piece, or advertising for some other cloud service, the way it’s written.
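For anyone who wants to reproduce this: a minimal Python sketch against Ollama’s local REST API (its default endpoint is `http://localhost:11434/api/generate`). The model tag and prompt are placeholders; this assumes `ollama serve` is running and the model has already been pulled with `ollama pull`.

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    """Build a non-streaming generate request for the Ollama API."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text.

    Requires `ollama serve` running and the model pulled beforehand.
    """
    data = json.dumps(build_payload(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (needs a running Ollama server; model tag is an example):
# print(ask_local("deepseek-r1:32b", "What happened at Tiananmen Square in 1989?"))
```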