AI Destroys Institutions

Nature and framing of the paper

  • Several commenters see it as an opinionated, speculative essay about future harms, not evidence-driven research; some criticize its loaded language.
  • Others clarify that it’s a law-journal essay/op‑ed hosted on SSRN, not a scientific paper, so a persuasive tone is expected.
  • There’s debate over whether calling it a “research paper” on SSRN is misleading or just standard law-school labeling.

Institutions, trust, and pre‑existing decay

  • Many argue institutions (government, press, universities) had already lost public trust long before AI (e.g., economic crises, COVID, wars, corruption).
  • Some think the paper over-romanticizes institutions as transparent, empowering, and risk-tolerant, which clashes with many readers’ lived experience.

Democratization vs centralization and monopoly

  • One side: AI lowers barriers, “democratizes knowledge,” and breaks gatekeeping by experts and credentialed elites.
  • Other side: real power accrues to a few firms hoarding GPUs, RAM, and data; AI becomes a new centralized “clergy,” suppressing the open web and local autonomy.
  • Hardware scarcity and cloud-only access are cited as mechanisms that can entrench monopolies.

Expertise, professions, and “knowledge monopolies”

  • A strong thread concerns AI eroding expertise: people can generate outputs without understanding them, hollowing out real skill and institutional capacity.
  • Others counter that existing institutions already abuse their knowledge monopolies (e.g., law, academia), so weakening them may increase pluralism.

Linguistics, “stochastic parrots,” and Chomsky

  • Heated subthread on whether LLMs challenge Chomskyan ideas about innate Universal Grammar.
  • Some claim LLMs show language can be learned via statistics alone; others respond that models use vastly more data than humans and don’t refute innate capacities.
  • Use of “stochastic parrot” as a dismissive label is criticized as misreading the original paper, which focused on deployment risks.

AI, law, and access to justice

  • Commenters disagree over whether lawyers’ criticism is self-interested protectionism or valid concern.
  • Several highlight the extreme cost and procedural complexity of litigation; some would prefer risking an LLM’s bad argument over being priced out entirely.
  • Others insist current LLMs are too unreliable for legal reasoning, pointing to lawyers sanctioned for citing fabricated cases.

Human cognition, skills, and agency

  • Repeated worry that reliance on AI (especially coding assistants) atrophies human skills and the “ability to think” more than earlier technologies (calculators, smartphones) did.
  • Some share personal experiences of feeling lost without autocomplete and resolve to do some work AI‑free to preserve competence.
  • Skeptics argue humans were always susceptible to misinformation and cognitive laziness; AI is an amplifier, not a new phenomenon.

Accountability and error

  • Key concern: with humans, you can retrain, reassign, or fire; with models, firms can scapegoat “the AI” and evade responsibility.
  • Others say AI outputs should be treated as tool outputs: the real mistake lies with the humans who rely on them uncritically, and courts so far punish humans, not models.

Printing press and historical analogies

  • One camp likens AI panic to clergy fearing the printing press and losing their monopoly on knowledge.
  • Critics say the analogy is inverted: the internet/printing press are decentralized; LLMs, controlled by a few corporations, may re‑centralize control and spread “witch-hunt” style misinformation at scale.

Software development and institutional knowledge

  • Several connect the paper’s thesis to large software systems: AI can generate code and docs, but understanding remains shallow if humans don’t do the reasoning.
  • Concern that over-reliance on AI design erodes shared mental models of systems, weakening organizational resilience and decision‑making.

Overall split

  • Supporters see AI as an “entropy machine” that hollows out expertise, agency, and the human networks institutions need.
  • Opponents think the paper overstates AI’s uniqueness, ignores existing institutional rot, and underplays AI’s potential to cut costs and broaden access.