OpenAI resets spending expectations from $1.4T to $600B
Hype, “Commitments,” and Investor Messaging
- Many see the shift from $1.4T to $600B as pure hype: numbers “pulled out of thin air” to keep excitement and valuation high.
- Confusion and criticism around “commitments” vs. “expectations”: vendors (e.g., cloud/infra providers) appeared to invest on the back of those signals, only to see them walked back.
- Some say this is deliberate bubble management: an attempt to deflate expectations gradually, without popping the whole AI story or spooking investors and partners.
- Others argue the article itself is sloppy, conflating capex and opex; they claim OpenAI’s projected compute spend through its income statement is actually up, not down.
Feasibility of the Spend and Revenue Projections
- Infrastructure constraints (power, datacenters, GPUs, RAM, storage, permitting) are seen as hard limits; talk of trillions implied power builds (e.g., many nuclear plants) that were never realistic.
- The $280B revenue projection for 2030 is widely doubted: it would require roughly 13x growth from 2026 in just a few years, on products that still feel early and are extremely costly to run.
- Debate over profitability: some insist inference can be profitable; others point to large reported losses, massive training spend, staffing, support, sales, and debt, arguing “profitable inference” is a meaningless partial metric.
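The growth rate implied by that skepticism is easy to check. A minimal sketch of the arithmetic (the ~13x multiplier and the 2026–2030 horizon come from the discussion; the base-year revenue figure is not given, so only the multiple is used):

```python
# Implied compound annual growth rate (CAGR) for a ~13x revenue
# multiple over the four years from 2026 to 2030.
multiplier = 13.0
years = 2030 - 2026  # 4

cagr = multiplier ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.0%}")  # roughly 90% per year
```

Sustaining on the order of 90% annual growth for four consecutive years is the bar that projection sets, which is the substance of the "widely doubted" reaction.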
Product Reality, Competition, and Moats
- Several anecdotes describe genuinely impressive coding agents and workflow gains, with expectations that software engineering and adjacent fields will be heavily disrupted.
- Counterpoints highlight non-determinism, the difficulty of reliable QA, and trials in which AI made devs slower even though they felt faster.
- Strong consensus that there is no durable moat at the model layer: Claude, Gemini, DeepSeek, and open-source models have closed much of the gap; switching costs are low because the interface is just natural language.
- Metrics from multi-model platforms (like OpenRouter) show OpenAI models not dominating usage there, though participants dispute how representative that is.
Macro, Jobs, and Bubble Talk
- Many question the macro story: spending hundreds of billions while projecting that AI will wipe out a large fraction of jobs—who buys the output?
- UBI is viewed by some as politically implausible; others think elites will be forced into some form of support to avoid unrest.
- Dark scenarios are raised: concentration of wealth and consumption in a small elite, “late-stage capitalism,” and potential taxpayer bailouts if the AI capex bubble pops.
- Multiple comments liken the situation to past bubbles (tulips, South Sea, WeWork, 2008), arguing current valuations and spending plans reflect crowd madness more than grounded economics.