Questions censored by DeepSeek
Nature and extent of DeepSeek censorship
- Many commenters attribute DeepSeek’s behavior to Chinese legal requirements to uphold “Core Socialist Values” and avoid politically sensitive topics (e.g., Tiananmen, Taiwan, Uyghurs).
- Hosted DeepSeek (especially R1 671B on deepseek.com and some US-hosted APIs) often gives stock refusals or CCP‑aligned framings on such prompts, while answering similar questions about other countries.
- Several note that the censorship is asymmetric: detailed criticism of the US is allowed, while criticism of Chinese state actions is blocked.
Hosted vs local, and model confusion
- Strong distinction between:
  - DeepSeek-R1 671B (the original reasoning model, heavily censored),
  - DeepSeek-R1-Zero (an earlier RL-only variant, reportedly less aligned),
  - distilled models (Llama/Qwen fine‑tuned on R1 outputs) used by Ollama, Groq, etc.
- Distilled smaller models often show much weaker or no censorship on Chinese politics, leading to conflicting anecdotes from users who think they’re “running R1 locally” when they’re actually running a distilled Llama/Qwen.
- Some report additional bolt‑on moderation on hosted services: partial answers appear, then are wiped and replaced with a generic refusal.
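The "partial answer, then wipe" behavior reported above is consistent with moderation bolted on outside the model: the frontend streams tokens as they arrive, while a separate filter scans the accumulated text and, on a match, replaces everything shown with a stock refusal. A minimal sketch of that pattern, with all names and the term list hypothetical (no hosted provider's actual implementation is public):

```python
# Sketch of a "stream, then wipe" moderation layer, as users describe seeing
# on some hosted frontends. BLOCKLIST and REFUSAL are hypothetical stand-ins;
# real deployments would use far larger term lists or a classifier model.

BLOCKLIST = {"tiananmen"}
REFUSAL = "Sorry, that's beyond my current scope."

def moderated_stream(tokens):
    """Yield successive UI states: growing partial text, or a wipe + refusal."""
    shown = ""
    for tok in tokens:
        shown += tok
        # The filter runs alongside generation, not inside the model,
        # which is why real content can appear briefly before removal.
        if any(term in shown.lower() for term in BLOCKLIST):
            yield REFUSAL  # previously streamed text is replaced wholesale
            return
        yield shown

# The user briefly sees genuine output before the filter catches up:
states = list(moderated_stream(["In ", "1989, ", "Tiananmen ", "Square..."]))
```

This also explains why the same prompt can succeed over the raw API but fail in a web UI: the wipe lives in the serving stack, not the weights.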
Technical implementation and jailbreaks
- Debate over whether censorship is:
  - post‑hoc filtering of outputs,
  - explicit safety fine‑tuning (RLHF),
  - or implicit via censored training data.
- Evidence suggests all three exist across different Chinese models and hosting setups.
- Users show simple jailbreaks (e.g., leetspeak / ROT13 / alternative encodings) that bypass keyword filters and elicit detailed Tiananmen descriptions.
- Similar multi‑layer safety stacks and browser‑side output filters are described for ChatGPT and other US models.
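The encoding jailbreaks above work because a substring-based filter matches raw text, while the model itself can often decode ROT13 or leetspeak. A toy illustration, assuming a naive filter with a hypothetical term list (real filters are larger and may normalize input):

```python
import codecs

# BLOCKED is a hypothetical stand-in for a real filter's term list.
BLOCKED = ["tiananmen", "uyghur"]

def keyword_filter(prompt: str) -> bool:
    """Return True if the prompt trips the naive substring filter."""
    return any(term in prompt.lower() for term in BLOCKED)

plain = "What happened at Tiananmen Square in 1989?"
rot13 = codecs.encode(plain, "rot13")   # "Jung unccrarq ng Gvnanazra..."
leet = plain.replace("a", "4").replace("e", "3")

assert keyword_filter(plain)        # caught by the substring match
assert not keyword_filter(rot13)    # encoded form sails past it
assert not keyword_filter(leet)     # "Ti4n4nm3n" never matches "tiananmen"
```

The mismatch is structural: the filter operates on surface strings, but the model operates on meaning, so any reversible transformation the model understands becomes a bypass.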
Comparison with Western LLMs
- Many argue Western models also censor heavily (weapons, self‑harm, “crime stats,” group‑targeted questions, some live political scandals) but frame it as “safety” or “harm reduction.”
- Examples show uneven treatment depending on country, religion, or person, and non‑deterministic refusals.
- Some see Chinese censorship as more overt and state-driven; Western censorship as subtler, corporatized, and still influenced by governments and powerful individuals.
How much this matters
- Split views:
  - Some only care about coding/technical tasks and see political censorship as irrelevant.
  - Others worry that people increasingly use LLMs instead of search, so embedded propaganda or omitted history is socially dangerous.
- Several call for symmetric audits: similar prompt‑refusal datasets for ChatGPT, Gemini, Grok, etc., not just DeepSeek.