The Generative AI Con
Perceived Bubble and Business Viability
- Many commenters agree there is a large investment bubble: huge capex, large operating losses (multi‑billion per year), and valuations that seem predicated on near‑term “agents” or AGI that don’t yet exist.
- Debate over whether this is a classic tech bubble (like dot‑coms or railroads) that will eventually yield durable value, or an outright “con” where economics never pencil out.
- Key concern: inference may be cheap at the margin, but training and ongoing R&D burn are massive; revenues (even with millions of users) look small relative to spend.
- Others argue that big tech has so much cash they can absorb years of low returns, and that the losers will mainly be VCs and public shareholders, not the technology itself.
Current Capabilities and Progress
- Skeptics say core model capability has plateaued since GPT‑4: improvements are incremental (longer context, multimodal, lower latency/cost) rather than a new intelligence “phase shift.”
- Supporters counter with examples: better reasoning models (e.g., gains on coding benchmarks), multimodal use (photos, screenshots, audio), and substantial efficiency gains on the same hardware.
- Some highlight domain successes beyond LLM chat (protein folding, weather prediction, robotics research), though others note this is broader ML, not specifically chat-style LLMs.
Killer Apps and Everyday Use
- One camp claims there is still no iPhone‑style killer app; if LLMs vanished, most people’s lives wouldn’t materially change.
- The opposing camp insists coding assistance is already a killer app: many report 2–10x productivity gains for certain tasks, rapid MVP building, and heavy daily use.
- Other recurring personal use cases: writing and tone-polishing emails, second‑language support, documentation summarization, research assistance, alt‑text and image understanding, ad‑hoc troubleshooting (appliances, finance, math).
- Even enthusiasts concede most current value is individual/assistant‑style, not robust autonomous agents integrated deeply into production systems.
Developer Productivity and Jobs
- Strong divide: some see LLMs as “junior devs at scale” that free seniors to focus on design; others find them unreliable, flow‑breaking, and net negative due to subtle bugs.
- Concerns about squeezed junior hiring and a future skills gap when current seniors retire.
- Broader worry: productivity gains could reduce overall demand for labor, with uneven distributional consequences under the current economic system.
Cost, Efficiency, and Infrastructure
- Ongoing argument about whether cheap APIs are subsidized below true cost; insiders claim inference is now profitable per query, with subsidies concentrated in training.
- Comparisons to railroad and telecom build‑outs: enormous overinvestment that later leaves behind useful infrastructure (data centers, GPUs) even if many firms die.
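The training-vs-inference argument above is essentially a fixed-vs-marginal-cost claim: inference can be profitable per query while the business still loses money overall if fixed training and R&D burn exceeds aggregate per-query profit. A minimal sketch of that arithmetic, using entirely hypothetical placeholder numbers (none of these figures come from the discussion):

```python
# Back-of-envelope unit economics for the training-vs-inference argument.
# ALL numbers used below are hypothetical placeholders for illustration only.

def annual_margin(queries_per_year: float,
                  revenue_per_query: float,
                  inference_cost_per_query: float,
                  fixed_annual_burn: float) -> float:
    """Net annual margin: per-query profit scaled by volume,
    minus fixed training/R&D burn."""
    per_query_profit = revenue_per_query - inference_cost_per_query
    return queries_per_year * per_query_profit - fixed_annual_burn

# Hypothetical scenario: inference is profitable at the margin
# ($0.002 profit per query), yet a $5B/yr fixed training + R&D burn
# still swamps $2B/yr of aggregate per-query profit.
margin = annual_margin(
    queries_per_year=1e12,
    revenue_per_query=0.004,
    inference_cost_per_query=0.002,
    fixed_annual_burn=5e9,
)
# margin is negative in this scenario: "profitable inference" and
# "money-losing business" can both be true at once.
```

The sketch shows why both sides of the thread can be right: the sign of the total depends on whether volume times per-query profit ever catches up to the fixed burn.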
Hype, Media, and Author’s Tone
- Many find the article's tone excessively angry and ad hominem, with rhetorical sloppiness that undermines its substantive points about bubbles and business models.
- Others say the indignation resonates: they feel gaslit by constant claims of imminent AGI and economic transformation that don’t match their real-world experience.
- General recognition that media and founders have strong incentives to exaggerate both upside (“world-changing AGI”) and downside (“kill us all”) to capture attention and capital.
Historical Analogies and Future Scenarios
- Comparisons to early internet, smartphones, electricity, Manhattan Project, cars vs horses, blockchain, and previous AI winters.
- Shared uncertainty: technology clearly has real use and will persist; open question is which companies, if any, will justify today’s valuations and whether a sharp correction triggers wider economic fallout.