Conspiracy theorists can be deprogrammed
AI “Deprogramming” as Tool and Threat
- Many see using AI to “deprogram” conspiracy theorists as inherently political: deprogramming is just re-programming to someone else’s agenda.
- Concern that such systems could easily become government or corporate propaganda tools, a harm arguably worse than tolerating a minority of conspiracy believers.
- Others counter that this “tool” already exists and is being used; better to do it transparently than leave the field to opaque actors.
Can AI Also Radicalize?
- Multiple comments argue the reverse is not only possible but already happening: social media algorithms bombard people with “alternative facts” for engagement.
- Several say it’s easier and more profitable to create new believers than to deprogram existing ones.
- Some think the real asymmetry is that extremist and Nazi-style content has many active promoters but few effective deprogrammers.
Conspiracies: Noise vs Signal
- One camp: conspiracy theories mostly add noise, making genuine conspiracies harder to detect (e.g., QAnon obscuring real trafficking).
- Another: skepticism toward authority is rational; some “theories” later proved true (Watergate, industry cover-ups, etc.), so blanket pathologizing is wrong.
- Debate over whether elite coordination is mostly “just incentives” or effectively a conspiracy in all but name.
Trust, Authority, and Epistemology
- Repeated theme: conspiracists don’t actually reject authority; they relocate it, from institutions to podcasts, influencers, and anonymous accounts.
- Split between those who see conspiracists as curious but under-informed vs. those who see them as preferring emotionally satisfying narratives over primary sources.
- Once institutional trust is broken, some say it is nearly impossible to restore; LLMs that follow “official lines” only deepen suspicion.
Study Design and Definition Issues
- Criticism that the underlying research redefines “conspiracy theory” as “untrue conspiracy,” excluding widely accepted real conspiracies (e.g., lobbying, corporate cover-ups) and thereby baking in ideological bias.
- Objections to the study’s heavy reliance on GPT-4 for screening and classification without human validation.
Social Media, Addiction, and Regulation
- Some argue treating social media addiction and regulating recommendation systems would be more effective than AI deprogramming.
- Proposals include legal penalties for knowingly spreading political misinformation, but others warn this quickly becomes censorship.
LLMs as Socratic Partners
- Several see value in AI’s patience and Socratic questioning to foster self-reflection without human fatigue.
- Anecdotes show LLMs can generate strong counter-arguments, but may need human “shepherding” and risk being dismissed as partisan or censored.