The rational conclusion of doomerism is violence
Violence, AI “doomerism,” and rationality
- Many argue that even believing AI poses existential risk does not make violence rational; “the ends don’t justify the means” and, more practically, lone‑wolf attacks don’t work.
- Others counter that if you truly believe extinction is likely, inaction or mere blogging seems incoherent, and that history shows elites often respond only to force.
- A middle position: it’s coherent to think extinction is coming yet still reject certain means on moral grounds, even if they might be effective.
Effectiveness of political violence
- Several note that individual or “adventurist” attacks (e.g., Molotovs, terrorism) mostly backfire, harden opposition, and are strategically useless.
- Others highlight revolutions, independence wars, and anti‑colonial struggles as cases where organized violence clearly mattered.
- Recurrent theme: organized, mass, politically legible violence can be effective; isolated violence almost never.
Democracy, inequality, and blocked channels of influence
- Some claim participatory democracy is structurally broken and policy tools are “empirically impotent,” so pressure inevitably vents as violence.
- Others respond that unpopularity of AI‑risk views is not a democratic failure but a failure of persuasion; “I didn’t get my way” ≠ “democracy is broken.”
- Rising wealth inequality and an entrenched state monopoly on violence are seen by some as making elite violence routine while delegitimizing popular resistance.
Regulation vs inevitability of AI
- One camp argues AI development is like nuclear arms: strategic pressures mean “someone will build it,” so killing individuals or bombing data centers only slows, never stops, progress.
- Others point to nuclear, biological, chemical, landmine, and ozone treaties as evidence that dangerous tech can be slowed, constrained, or partially forsaken, even if not eliminated.
- There’s concern that an AGI race might itself increase nuclear‑war risk if leaders come to see losing the race as existential defeat.
Critiques of AI‑risk culture and rhetoric
- Some see “P(doom)” argumentation as a Pascal’s‑wager‑style rhetorical move that smuggles in extreme policies once any nonzero extinction risk is conceded; the expected‑value structure of this move is sketched after this list.
- Others defend leading AI‑risk advocates as consistently opposing criminal violence and focusing on international regulation, while critics accuse them of earlier “shut it all down, airstrike data centers” extremism and later backpedaling.
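For readers who want the objection spelled out, here is a minimal sketch of the expected‑value structure critics attribute to the “P(doom)” move. The symbols p, L, and C are our illustrative notation, not terms from the thread:

```latex
% Expected-value form of the criticized argument.
% p: conceded extinction probability (any nonzero value)
% L: loss from extinction, taken as astronomically large
% C: finite cost of the proposed extreme policy
\mathbb{E}[\text{loss, no policy}] = p \cdot L,
\qquad
\mathbb{E}[\text{loss, with policy}] \approx C .
% If L is treated as effectively unbounded ("all future value"),
% then p \cdot L > C for every p > 0, so the policy "wins"
% no matter how small p is -- the Pascal's-wager structure.
```

The critics’ point is precisely this insensitivity to p: once the comparison is framed this way, conceding any nonzero extinction risk appears to license arbitrarily costly interventions.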
Alternative risks and uses for AI
- A minority emphasize climate change as the central existential threat and see advanced robotics/AI as necessary for large‑scale adaptation (firefighting, infrastructure, geo‑response).
- Skeptics question whether robots are needed at all, citing political and organizational failures as the primary obstacles.
Capabilities and limits of current AI
- Several argue current systems can’t design fabs, nukes, or weapons on their own and face hard physical, economic, and energy bottlenecks; exponential growth is self‑limiting (formalized in the sketch after this list).
- Others worry less about “Terminator” scenarios than about AI‑driven social engineering, manipulation, and already‑visible harms (e.g., LLM overconfidence, resource use).
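To make the “self‑limiting exponential” point concrete, here is a minimal sketch using the standard logistic model; the carrying capacity K (energy, capital, fab throughput) and growth rate r are our illustrative formalization, not quantities from the thread:

```latex
% Logistic growth: exponential at small x, saturating near the ceiling K.
\frac{dx}{dt} = r\,x\left(1 - \frac{x}{K}\right)
\quad\Longrightarrow\quad
x(t) = \frac{K}{1 + \frac{K - x_0}{x_0}\,e^{-rt}} .
% For x << K this reduces to dx/dt ~ r x (pure exponential growth);
% as x approaches K, growth stalls: the bottleneck, not the initial
% growth rate, sets the long-run limit.
```

On this view, extrapolating an early exponential trend fails exactly where the physical, economic, and energy bottlenecks begin to bind.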