After the AI boom: what might we be left with?
Singularity vs. Plateau and “Machine God” Narratives
- Some argue current AI is on an irreversible trajectory to a singularity: brains are “just computation,” and smooth loss curves plus empirical scaling laws imply that progress continues for as long as compute keeps scaling.
- Others counter this looks like past AI booms: big breakthrough → rapid gains → plateau → long tail of niche applications. They see chat capability already stagnating and progress relying on brute-force compute.
- A subset uses quasi-religious language (“machine god”), which critics call irrational hype or “faith,” not evidence-based forecasting.
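The "smooth loss curve" argument above can be made concrete with a toy power law. The sketch below uses invented constants (`a`, `b` are not measured values) just to show why the same curve supports both camps:

```python
# Hypothetical illustration of the scaling-law argument: loss falls as a
# power law in training compute, L(C) = a * C**(-b).
# The constants below are made up for illustration, not measured values.
a, b = 10.0, 0.05

def loss(compute: float) -> float:
    """Predicted loss at a given training-compute budget (arbitrary units)."""
    return a * compute ** (-b)

# Each 10x increase in compute removes the same *fraction* of remaining loss:
for c in (1e21, 1e22, 1e23, 1e24):
    print(f"C = {c:.0e}  ->  L = {loss(c):.3f}")
```

The curve is smooth and predictable (the optimist's reading) while also showing diminishing absolute returns per order of magnitude of compute (the skeptic's reading).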
Intelligence, Decisions, and Limits of LLMs
- One camp insists “computers that can talk and make decisions” are historically profound, comparable to early PCs or the internet.
- Skeptics say LLMs don’t really “decide” or “think”; they’re sophisticated autocomplete detached from the real world, with hallucinations making them untrustworthy for many domains.
- Disagreement over whether language competence implies reasoning: some see LLMs as finally cracking human-like abstract thought; others note non-linguistic cognition and insight that LLMs don’t match.
- Debate over whether hallucinations are a fundamental limitation or mainly a training/behavioral issue that can be reduced to near-human levels.
Symbolic vs. Neural Approaches
- Critics of current LLMs call for new architectures: hybrid symbolic–continuous systems, explicit hierarchies (IS-A / HAS-A), non–gradient-descent learning, and simpler models.
- Defenders invoke the “bitter lesson”: hand-designed symbolic AI largely failed; general-purpose neural learners have set a very high bar, even if still imperfect.
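The explicit IS-A / HAS-A structure the critics ask for can be sketched in a few lines. This is a toy illustration with invented example data, not anyone's proposed architecture: relations are stored as plain data and answered by graph walk rather than learned by gradient descent:

```python
# Toy symbolic knowledge base: IS-A (taxonomy) and HAS-A (parts) links.
# All names are invented examples for illustration.
IS_A = {"penguin": "bird", "robin": "bird", "bird": "animal"}
HAS_A = {"bird": {"wings", "beak"}, "animal": {"heart"}}

def is_a(thing: str, kind: str) -> bool:
    """Follow IS-A links upward until we hit `kind` or run out of parents."""
    while thing is not None:
        if thing == kind:
            return True
        thing = IS_A.get(thing)
    return False

def parts(thing: str) -> set:
    """A thing inherits the parts of everything it IS-A."""
    found = set()
    while thing is not None:
        found |= HAS_A.get(thing, set())
        thing = IS_A.get(thing)
    return found

print(is_a("penguin", "animal"))  # penguin IS-A bird IS-A animal
print(parts("robin"))             # wings/beak from bird, heart from animal
```

The appeal is auditability: every answer traces to explicit links. The “bitter lesson” counterargument is that hand-curating such links never scaled to open-ended domains.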
Bubble, Economics, and What Remains
- Many see an economic bubble: massive capex on GPUs, data centers, and frontier training with no clear path to sustainable profit, and inference that is profitable only while subsidized.
- Comparisons to dotcom: even if valuations crash, useful infrastructure (models, tooling, datacenters, upgraded power grids) will persist, though chips are shorter-lived than fiber.
- Others argue this isn’t like 2000: AI is deeply integrated into existing “too big to fail” internet giants; demand for tokens and B2B workflows is likely durable, even as many startups die.
GPUs, Obsolescence, and Hardware Reuse
- Disagreement on GPU lifespan: some claim 1–3 years due to obsolescence and heavy use; others say solid-state parts last much longer if cooled properly, with economics (power efficiency, warranties) driving replacement more than wear.
- Expectation that post-bubble hardware will be repurposed: cheaper cloud GPUs, gaming, scientific compute, or local AI; analogies made to cheap post-dotcom servers and routers.
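Both bullets above reduce to a back-of-the-envelope cost comparison. Every number in this sketch is an invented assumption (prices, wattages, throughput ratios, electricity rates), used only to show the shape of the argument:

```python
# Rough sketch of GPU replacement economics. All figures are assumptions,
# not real GPU prices or benchmarks.
HOURS_PER_YEAR = 24 * 365 * 0.8      # assume 80% utilization
PRICE_PER_KWH = 0.12                 # assumed electricity rate, USD

def cost_per_unit_work(price_usd, amort_years, watts, rel_throughput):
    """USD per unit of work: amortized purchase price plus power."""
    amort = price_usd / (amort_years * HOURS_PER_YEAR)   # USD per hour
    power = watts / 1000 * PRICE_PER_KWH                 # USD per hour
    return (amort + power) / rel_throughput

old = cost_per_unit_work(10_000, 4, 700, 1.0)   # last-gen card, full cost
new = cost_per_unit_work(25_000, 4, 700, 3.0)   # 3x throughput, same power
sunk = cost_per_unit_work(0, 4, 700, 1.0)       # old card already paid for

print(f"old (full cost): ${old:.3f}/unit")      # why buyers upgrade
print(f"new (full cost): ${new:.3f}/unit")
print(f"old (sunk cost): ${sunk:.3f}/unit")     # why old cards get 2nd lives
```

Under these assumptions the newer card wins on full cost (driving replacement even when the old part still works), while an already-paid-for card has a very low marginal cost (driving the post-bubble second-hand repurposing the thread expects).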
Local Models, Practical Uses, and Hype vs. Reality
- Numerous examples of real value: coding assistance, summarization, translation, policy analysis, self-study with “tutor-like” chat, transcription of archives, niche media search.
- Strong skepticism persists: many users wouldn’t pay real money; some see LLMs as “crypto-like” hype—useful in pockets but far short of their world-changing marketing.
- Local/open models are highlighted as increasingly capable on consumer hardware, suggesting lasting productivity gains even if cloud economics sour.
Robotics and Physical-World AI
- Some see the next frontier in autonomous robots and drones, with AI agents orchestrating physical tasks (“Rosie from the Jetsons”).
- Others stress unsolved problems in dexterous manipulation and touch sensing; brute-force learning of fine motor skills may be far harder than vision and language.
Labor, Capitalism, and Distributional Fears
- Optimists imagine AI driving a path to post-scarcity and UBI-like societies, with humans “retired for life.”
- Pessimists foresee intensified inequality: capital owners benefit; knowledge workers displaced; poor pushed toward “serfdom” in a consumption economy that no longer needs their labor.
- Funding UBI via higher corporate taxes, money printing from productivity gains, or robot-owned consumption is discussed but remains speculative and politically fraught.
Geopolitics and “Too Big to Fail” AI
- A linked essay frames AI as akin to a wartime project: US vs. China race for superintelligence, with AI spending becoming geopolitically non-optional.
- Some agree this locks AI into “too big to fail” status, risking catastrophic fallout if the bet is wrong (debt, misallocated capital, domestic unrest).
- Others view this as US-centric overreach built on unproven assumptions about superintelligence; they argue other regions may benefit by free-riding on open models while avoiding overinvestment.
Cultural Backlash and Future Internet
- A strand expects a neo-Luddite turn: people retreating from hyper-fake, AI-saturated online life into low-tech, offline “genuine” living; social media decaying into an aging, angry fringe.
- Even in that world, AI might remain as a back-end utility layer—answering questions, automating “hard value” tasks—while visible consumer-facing hype shrinks.