Building supercomputers for autocrats probably isn't good for democracy
AI as a Tool for Authoritarian Control
- Many comments argue the main AI danger isn’t “rogue AGI” but that regimes can achieve near-total social control by:
  - Correlating online writing via stylometry and other signals to infer identity, beliefs, and early-stage dissent.
  - Fusing existing data streams (purchases, communications metadata, social graphs, location, cameras, drones, phones) into a surveillance panopticon that is now computationally tractable.
- Some say stylometry at mass scale is technically limited; others counter that:
  - Practical demos (e.g., unmasking alt accounts on forums) already work.
  - Authoritarians don’t need high accuracy; plausible signals and a chilling effect are enough.
- LLM-backed “always-on” household devices are seen as making Orwell’s telescreen finally feasible: constantly observing, inferring preferences and political leanings before people consciously form them.
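The stylometry claim above (linking alt accounts by writing style) can be sketched in a few lines. This is a toy illustration only, with invented sample texts; real deanonymization systems use far richer features (function words, punctuation habits, syntax) and large reference corpora.

```python
# Toy stylometry sketch: compare texts by character-trigram profiles.
# Sample posts are invented for illustration.
from collections import Counter
from math import sqrt

def trigram_profile(text: str) -> Counter:
    """Frequency counts of character trigrams, a classic stylometric feature."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

author1_post = "Honestly, I reckon the whole thing is overblown; honestly."
author1_alt = "Honestly, I reckon that argument is overblown as well."
author2_post = "The empirical data simply does not support such claims."

same = cosine(trigram_profile(author1_post), trigram_profile(author1_alt))
diff = cosine(trigram_profile(author1_post), trigram_profile(author2_post))
print(same > diff)  # the alt account scores closest to its true author
```

Even this crude feature set separates the invented authors, which is the commenters' point: the technique degrades gracefully, and a regime needs only "plausible signals", not courtroom-grade attribution.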
Propaganda, Misinformation, and “Flooding the Zone”
- One view: LLMs’ primary near-term use is as a force multiplier for messaging, producing cheap, tailored, high-volume BS that can drown out genuine discourse.
- Examples show LLMs easily generating rhetorically strong but unverified arguments on any side of an issue, suitable for automated campaigns.
- Others respond that:
  - The internet is already saturated with low-quality content; attention is maxed out.
  - People consume information via identity groups and curated channels; more junk may have diminishing marginal impact.
States, Billionaires, and Power Structures
- Debate over whether future power lies more with nation-states or ultra-wealthy individuals:
  - One side: states retain decisive advantages (armies, legal control over finance, heavy weapons); billionaires are fragile without state infrastructure.
  - Other side: cheaper drones and scalable violence narrow the gap; “tech feudalism” and private fiefs are plausible.
- Some argue it’s more accurate to talk about generic “power structures” than “nations” per se.
OpenAI–UAE Deal and Moral Responsibility
- Key fault line: should companies simply follow government sanctions lists, or independently refuse to empower autocrats?
  - One camp: if the US hasn’t sanctioned the UAE, it’s legitimate business; private firms lack the mandate or knowledge to be global moral arbiters.
  - Opposing camp: “not illegal” ≠ “ethical”; knowingly strengthening repressive regimes is itself wrong, regardless of State Department policy.
- Realpolitik argument: better that US-aligned Gulf monarchies get advanced AI than China; critics reply this is “arming” deeply illiberal regimes with powerful control tech.
Democracies vs Autocracies and Hypocrisy
- Several comments challenge the idea that “democracies good, autocracies bad” maps cleanly onto real-world behavior:
  - They point to mass violence, invasions, and large prison systems in self-described democracies.
  - They note Western tech firms have long sold surveillance and computing tools to repressive states (IBM in Nazi Germany, Cisco, Oracle, Palantir, etc.).
- Nonetheless, many still hold that giving more AI capacity to overtly authoritarian governments predictably worsens repression and is bad for democracy everywhere.