LLM Inevitabilism
Debating “Inevitability” vs Choice
- Many see LLMs as effectively inevitable: once a technology is clearly useful and economically powerful, multiple actors will pursue it, making rollback unrealistic short of major collapse or coordinated bans.
- Others argue “inevitability” is a rhetorical move: if you frame a future as unavoidable, you delegitimize opposition and avoid debating whether it’s desirable.
- Several comments distinguish between:
  - LLMs as a lasting, ordinary technology (like databases or spreadsheets), and
  - stronger claims that near‑term AGI or mass human obsolescence are destined.
Comparisons to Earlier Tech Waves
- Supporters liken LLMs to the internet or smartphones: rapid organic adoption, hundreds of millions of users, clear individual utility (search-like Q&A, coding help, document drafting).
- Skeptics compare them to Segways, VR, crypto, or Lisp machines: loudly hyped, heavily funded, but ultimately niche or re‑scoped.
- Counterpoint: none of those “failed” techs had current LLM‑level usage or integration into many workflows.
Economics, Sustainability, and a Possible AI Winter
- Disagreement over whether current LLM use is fundamentally profitable or heavily subsidized:
  - Some operators claim ad‑supported consumer usage and pay‑per‑token APIs can be high‑margin.
  - Others point to multibillion‑dollar training and datacenter spend, rising prices, and signs of “enshittification” (nerfed cheap tiers, opaque limits).
- Concerns include: energy and water use for data centers, finite high‑quality training data, and diminishing returns in model scaling.
Real‑World Usefulness vs Hype
- Many developers report genuine productivity gains for boilerplate, refactoring, docs, glue code, and “junior‑engineer‑level” tasks, especially with careful prompting and tests.
- Others find net‑negative value on complex, legacy codebases: non‑compiling patches, subtle bugs, and high review overhead. Cited studies suggest that AI‑assisted programmers often feel faster while actually being slower or introducing more defects.
- Similar splits appear outside coding (writing, law, finance, customer support): from “game‑changer” to “unreliable toy.”
Societal, Psychological, and Ethical Concerns
- Strong unease about AI companions, AI‑generated social sludge, mass disinformation, and loss of genuine human interaction; social media is repeatedly referenced as a warning case.
- Fears that gains will accrue mainly to model owners, deepening inequality and centralization, and that LLM‑based tools will be used to cut labor costs rather than improve lives.
- Some emphasize environmental and geopolitical risks: AI as leverage in trade or sanctions, and as another driver of emissions.
Agency and Governance
- Several argue that past “inevitable” trajectories (industrialization, nuclear, social media) were shaped—though not fully controlled—by policy, labor action, and public resistance.
- The thread repeatedly returns to the idea that LLMs are very likely to persist, but that how and where they are deployed, who controls them (centralized clouds vs. local/open models), and what is off‑limits remain political choices, not fixed destiny.