AI capex is so big that it's affecting economic statistics
Scale and Nature of AI Capex
- Commenters note AI capex is now ~1.2% of US GDP, striking for such a new category but still small vs historical mega-programs (railroads, Apollo, WWII); a rough dollar-figure check follows this list.
- Some argue the framing “eating the economy” overstates things; others emphasize the velocity: going from near-zero to a Norway-sized share of GDP in a couple of years is unprecedented.
- Debate over whether this is truly “AI capex” versus generic cloud/datacenter buildout with an “AI” label; several point out that Nvidia GPU sales and ad-driven ML (Meta, Google) are the real drivers.
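As a rough sanity check on the scale claims above, a minimal sketch; the GDP figures are ballpark assumptions for illustration (about $28T US nominal GDP, about $480B Norwegian nominal GDP), not numbers taken from the thread:

```python
# Rough sanity check of the "~1.2% of US GDP" and "Norway-sized" claims.
# All figures are assumed ballparks for illustration, not from the thread.
US_GDP_USD = 28e12          # assumed ~$28 trillion US nominal GDP
NORWAY_GDP_USD = 0.48e12    # assumed ~$480 billion Norwegian nominal GDP
AI_CAPEX_SHARE = 0.012      # the ~1.2% of GDP cited in the discussion

ai_capex = US_GDP_USD * AI_CAPEX_SHARE
print(f"Implied AI capex: ~${ai_capex / 1e9:.0f}B per year")
print(f"Relative to Norway's GDP: ~{ai_capex / NORWAY_GDP_USD:.0%}")
# -> roughly $340B/year, i.e. the same order of magnitude as Norway's GDP
```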
Bubble, ROI, and Opportunity Cost
- Strong disagreement on whether this is a bubble: critics ask “how many signs do we need?”; defenders say we’ll only know after the pop, or after clear positive ROI.
- Some highlight an inconsistency in claiming that AI capex both starves other sectors and adds fully to GDP: if the funds are diverted from other spending, the counterfactual multiplier has to be netted out.
- Others counter that higher expected AI returns raise the hurdle rate, starving marginal non‑AI projects even if the overall economy grows (a toy NPV illustration follows this list).
- A recurring theme: massive spend on rapidly depreciating assets (GPUs, short-lived models) vs past capex on century-scale infrastructure (rail, fiber); a quick depreciation comparison also follows below.
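To make the hurdle-rate argument concrete, a toy NPV sketch (all numbers are illustrative assumptions, not from the thread): the same marginal non‑AI project clears a pre-AI hurdle rate but fails a higher, AI-inflated one.

```python
# Toy NPV illustration of the hurdle-rate argument: the same marginal non-AI
# project clears an 8% hurdle but fails a 15% one (both rates assumed).
def npv(rate: float, cashflows: list[float]) -> float:
    """Discount a cashflow series; cashflows[0] is the upfront outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

project = [-100.0] + [14.0] * 15   # illustrative: $100 outlay, $14/year for 15 years

for hurdle in (0.08, 0.15):        # pre-AI hurdle vs AI-inflated hurdle (assumed)
    print(f"hurdle {hurdle:.0%}: NPV = {npv(hurdle, project):+.1f}")
# -> positive at 8%, negative at 15%: the project is starved even though
#    its own economics never changed
```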
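Similarly, a quick straight-line depreciation comparison (figures illustrative, not from the thread) shows why asset life dominates the annual cost of a given amount of capex:

```python
# Straight-line depreciation: the same capex costs far more per year when the
# asset is written off over a short life (figures illustrative, not from the thread).
def annual_capital_charge(capex_usd: float, useful_life_years: float) -> float:
    return capex_usd / useful_life_years

gpu_fleet = annual_capital_charge(100e9, 4)     # GPUs written off over ~4 years
long_lived = annual_capital_charge(100e9, 50)   # rail/fiber-style infrastructure
print(f"GPU fleet: ~${gpu_fleet / 1e9:.0f}B/yr vs long-lived infra: ~${long_lived / 1e9:.0f}B/yr")
# -> ~$25B/yr vs ~$2B/yr for the same $100B of capex
```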
Reuse, Depreciation, and Hardware Aftermath
- Concern that, once hype fades, companies may destroy or mothball GPU fleets for tax and logistical reasons; others note that at scale liquidators usually extract value, not landfill it.
- Some hope for repurposing: drug discovery, scientific computing, cheap gaming/VR, or other yet-unknown uses, echoing how dark fiber post‑dotcom later fueled new startups.
- Skepticism that we will systematically reuse everything; layoffs and capacity destruction are seen as more likely in some scenarios.
Energy, Environment, and Power Infrastructure
- Widespread concern about power demand: in-thread estimates have US AI datacenters rising to roughly 70–90 TWh/year, already a noticeable share of US electricity (a rough share calculation follows this list).
- Heated debate over renewables: some want mandates that every new AI datacenter be powered by clean energy; others note datacenters need firm, not intermittent, power and that long-duration storage and permitting are real bottlenecks.
- Several point out that big cloud firms are currently among the largest buyers of renewable PPAs and are exploring nuclear (especially small modular reactors), but siting, regulation, and bureaucracy slow deployment.
- Water use and local impacts (cooling, grid capacity, political capture) are recurring worries; some argue the long-lived energy infrastructure is the real durable benefit if AI fizzles.
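For context on the 70–90 TWh/year estimates, a minimal share calculation; the total-generation figure (~4,100 TWh/year for the US) is an assumed ballpark, not a number from the thread:

```python
# Rough share of US electricity implied by the in-thread 70-90 TWh/year estimates.
US_ANNUAL_GENERATION_TWH = 4100   # assumed ballpark for total US generation

for ai_demand_twh in (70, 90):
    share = ai_demand_twh / US_ANNUAL_GENERATION_TWH
    print(f"{ai_demand_twh} TWh/yr ≈ {share:.1%} of US electricity")
# -> roughly 1.7%-2.2%: noticeable, though not yet dominant
```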
Labor, Automation, and Inequality
- Major thread on the obsession with replacing white‑collar work (developers, lawyers, analysts); many interpret this as executive desire to cut payroll and shareholder‑driven cost minimization.
- Others frame automation as historically normal: like spreadsheets, CAD, and calculators, AI will compress some job categories (5 people vs 50) but not necessarily eliminate professions.
- There’s visible resentment and schadenfreude: non‑tech workers enjoying the idea that the “learn to code” crowd now faces a similar threat.
- Deep disagreement about whether AI-led efficiency gains will be broadly deflationary and welfare‑enhancing, or just enrich capital owners and further hollow out the middle class.
- Several stress that increased efficiency doesn’t help displaced workers without structural changes (ownership, safety nets, or new domains of demand).
AI Capabilities, Limits, and “Kaboom” Debate
- One camp sees “unstoppable progress” (chess/Go, protein folding, now competition-level math) and argues that anything formalizable and cheaply verifiable will eventually be dominated by AI, justifying the huge capex.
- Skeptics say the promised “kaboom” hasn’t shown up in drug prices, film/game quality, or clearly transformative non-demo applications; they see impressive toys but fragile systems and lots of slop content.
- Many report real productivity gains for search, coding, and analysis, especially for non-experts; others share frustrating experiences with agents, RAG, and context limits, arguing that current LLM+scaffolding is brittle.
- Dispute over whether LLMs truly “reason” or just approximate reasoning unreliably; some cite recent math benchmarks, others call out hype, unverifiable claims, and bluffing on formal contests.
Historical Analogies and Long-Term Outlook
- Comparisons to railroads, telegraph, dotcom fiber, and nails: earlier overbuilds created stranded assets that later underpinned new waves of innovation, often after investors were wiped out.
- Some note that past capex (rail, Apollo) clearly built broad, durable public goods; in contrast, AI capex might concentrate gains, accelerate capital centralization, and not distribute benefits as widely.
- Several expect an eventual crash in LLM valuations and GPU demand to be “glorious,” leaving behind cheap compute and overbuilt datacenters that future startups repurpose.
- Overall divide: one side views current AI capex as rational investment in a general-purpose technology with decades of productivity gains ahead; the other sees a speculative frenzy burning power, hardware, and money without commensurate, demonstrated societal returns, at least not yet.