How AI destroys institutions

Meta: Paper Quality, Scope, and HN Context

  • Several commenters note this is a draft, not peer‑reviewed, and argue it reads more like an opinion piece with academic trappings.
  • Critiques focus on: weak or indirect citations (e.g., Engadget/CNN for FDA AI use), superficial or incorrect examples (DOGE, FDA Elsa), typos, and “flowery” language.
  • Others reply that drafts exist precisely to gather feedback, that opinion is part of theory‑building, and that the citations are numerous even if uneven in quality.
  • Some find the tone and abstract compelling but are put off by the headline and writing style.

Is AI the Cause or Just an Accelerant?

  • One camp: AI is “throwing gas on the fire” of already‑fragile institutions; it accelerates existing social media–driven isolation, institutional rot, and the pathologies of late‑stage capitalism.
  • Another camp: blaming AI misidentifies the root causes (the monetary system, profit incentives, corrupt elites, weak regulation). AI is a powerful tool that reflects and amplifies human choices.
  • A few argue AI can reveal institutional weaknesses in a way that could ultimately enable reform rather than destruction.

State and Legitimacy of Institutions

  • Many say universities, the press, and the “rule of law” were decaying long before AI: captured by money, lobbying, careerism, and political polarization.
  • Others counter that, despite flaws, these institutions are still the best we have and remain essential to democracy; letting them fail without replacements is dangerous.
  • There’s debate over whether “civic institutions” are meaningfully different from the broader “establishment” of corporations, media, and state.

Expertise, Knowledge, and Work

  • Concern: AI erodes expertise by making surface‑level competence cheap, enabling novices to bypass professionals and “knock down Chesterton’s fences.”
  • Counterpoint: this democratization is good, much like search engines and open source, letting non‑experts do tasks previously reserved for credentialed elites (coding, basic legal/technical work, learning math).
  • Several note a bimodal effect: careful “expert users” become dramatically more capable, while “easy button” users atrophy, an effect already visible among students.

Social, Cognitive, and Informational Effects

  • Commenters echo the paper’s worries that AI short‑circuits critical thinking and encourages offloading judgment, but note this continues trends from smartphones and social media.
  • Some emphasize AI‑driven bots and “dead internet” dynamics that increase chaos and isolation; others stress that AI tutors and assistants can deepen learning and connection when used deliberately.

Law, Politics, and Accountability

  • Some predict professions (especially law) will move aggressively against AI, partly out of self‑preservation, partly due to genuine conflicts with legal norms and accountability.
  • Tools vs. users: recurring analogies to guns and petrol hold that AI may not “intend” harm, but that choosing to deploy it into fragile systems is still blameworthy.
  • There’s also unease about censorship, platform control, and how earlier attempts to suppress certain political speech contributed to institutional distrust long before AI.