How the AI Bubble Bursts
State of the “AI Bubble”
- Many agree AI is transformative and here to stay, but argue that valuations, capex, and hype look bubble‑like.
- Others push back: token demand and GPU prices are high, and labs and clouds report being compute‑constrained, so they see no imminent “crash.”
- Historical analogies: dot‑coms and railroads (real tech, overinvestment, then shake‑out), versus tulip mania or crypto (pure speculation). Several argue AI is clearly not “tulips” because it has evident utility.
Profitability, Tokens, and Business Models
- Big unresolved question: is inference (serving tokens) truly profitable once you include training, capex, and financing?
- One camp: per‑token inference has high gross margins; labs would “print money” if they stopped training; independent open‑weight providers on thin margins suggest costs are low.
- Other camp: ignoring training and capex is “creative accounting”; ever‑larger models and data centers make costs grow faster than revenue; subscription plans often look subsidized.
- Strong disagreement over whether ARR / run‑rate claims and executive statements can be trusted.
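The disagreement between the two camps is largely about which costs count. A toy back‑of‑envelope sketch, using entirely hypothetical round numbers (not estimates for any real lab), shows how the same business can look very profitable or loss‑making depending on whether training and capex are included:

```python
# Toy unit economics: why the two camps reach opposite conclusions.
# All figures are hypothetical round numbers, not estimates for any real lab.

revenue = 1_000          # $M/yr from serving tokens
serving_cost = 400       # $M/yr inference-only cost (GPU time, power)
training_cost = 500      # $M/yr amortized frontier training runs
capex_financing = 300    # $M/yr data-center depreciation + interest

# Camp one looks at inference alone:
gross_margin = (revenue - serving_cost) / revenue

# Camp two insists on the fully loaded picture:
fully_loaded = (revenue - serving_cost - training_cost - capex_financing) / revenue

print(f"inference-only gross margin: {gross_margin:.0%}")   # prints 60%
print(f"fully loaded margin:        {fully_loaded:.0%}")    # prints -20%
```

With these assumed numbers, per‑token serving looks like a 60% gross‑margin business, yet the fully loaded operation runs at −20%; both sides can be arithmetically right while talking past each other.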
Winners, Competition, and Moats
- Debate over whether this is “winner‑take‑all.”
- Some: brand + integration (e.g., OS, mobile, Office‑like suites) could produce Google‑style dominance.
- Others: models are close substitutes, switching is cheap, and open or Chinese models are quickly catching up; LLM hosting may become a commodity like VPS.
- Unclear if frontier labs can maintain a durable moat once smaller fine‑tuned or domain‑specific models are “good enough.”
Usage, Productivity, and Jobs
- Many engineers report huge productivity gains (especially coding, refactoring, tests, document drafting), sometimes calling AI “best IDE ever.”
- Others say org‑level effects are murky: more code volume, harder review, fragile systems, and no obvious macro productivity bump or mass layoffs yet.
- Disagreement over whether token demand reflects deep value or FOMO‑driven, barely‑measured experiments.
Local Models, Hardware, and RAM
- Some expect local / open models on consumer hardware to erode SaaS LLM economics for many use cases.
- Others doubt local models will match top frontier performance, but concede “good enough” may win on price and privacy.
- RAM and GPU markets: prices are high today; some expect TurboQuant‑style efficiency gains to ease pressure, while others invoke the Jevons paradox (efficiency → bigger models and more tokens, not lower demand).
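The Jevons‑paradox claim is itself a small piece of arithmetic: if demand is sufficiently price‑elastic, an efficiency gain raises total spend rather than lowering it. A toy sketch with hypothetical numbers (the 10x cost drop and 20x demand response are assumptions, not measurements):

```python
# Jevons-paradox toy arithmetic with hypothetical numbers: a 10x efficiency
# gain can still increase total spend if demand grows faster than cost falls.

cost_per_mtok_before = 10.0   # $ per million tokens, before efficiency gains
cost_per_mtok_after = 1.0     # 10x cheaper after quantization/efficiency work
tokens_before = 100           # million tokens/day at the old price
tokens_after = 2_000          # assumed 20x demand growth at the new price

spend_before = cost_per_mtok_before * tokens_before  # $1,000/day
spend_after = cost_per_mtok_after * tokens_after     # $2,000/day

print(f"spend before: ${spend_before:,.0f}/day")  # prints $1,000/day
print(f"spend after:  ${spend_after:,.0f}/day")   # prints $2,000/day
```

Under these assumptions, tokens get 10x cheaper yet aggregate spend doubles, which is the shape of the argument that efficiency gains will not relieve hardware demand.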
Meta: HN and Hype
- Several complain HN is polarized and noisy on AI, with both “collapse soon” and “inevitable god‑tech” narratives.
- General expectation: even if the financial bubble bursts, AI tools and models will persist and keep improving.