OpenAI raises $110B on $730B pre-money valuation
Valuation, bubble talk, and comparisons
- Many view the $730B pre-money valuation as bubble territory, comparing it to the dot-com and crypto bubbles or to WeWork/Tesla-style hype: massive revenue but far from proven long‑term economics.
- Others argue the valuation implicitly assumes AGI‑scale impact, not just “better SaaS,” and is therefore extremely risky but not obviously irrational at current hype levels.
- Skeptics note OpenAI’s losses, heavy capital needs, and lack of a clear, durable moat; some say on revenue alone it looks more like a tens‑of‑billions company, not hundreds.
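The headline arithmetic behind these comparisons is simple pre/post-money accounting (the stake estimate assumes the full $110B converts to primary equity at the post-money valuation, which the round's contingent structure may not bear out):

```latex
V_{\text{post}} = V_{\text{pre}} + I = \$730\text{B} + \$110\text{B} = \$840\text{B},
\qquad
\text{new-investor stake} \approx \frac{I}{V_{\text{post}}} = \frac{110}{840} \approx 13.1\%
```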
Structure of the round and “circular financing”
- The $110B is not all cash in hand:
  - Amazon: $15B now, $35B contingent on conditions (widely believed to be an IPO or hitting some AGI milestone).
  - Nvidia and SoftBank: $30B each, paid in installments.
- Commenters describe this as circular: Nvidia and Amazon “invest” and then recoup via GPU sales and cloud spending; effectively trading hardware/credits for equity while juicing each other’s revenue and market caps.
- Commenters debate whether this is just normal vendor financing and milestone‑based tranching, or a dangerous form of “revenue cosplay” that magnifies systemic risk.
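As a sanity check on the figures above, the reported tranches do sum to the headline number. A minimal sketch (amounts in $B as described in the thread; installment schedules for Nvidia and SoftBank are not specified, so only their totals appear):

```python
# Sanity check: the reported tranches should sum to the $110B headline.
# Figures in $B, as described above. The Amazon split is the only
# cash-now vs. contingent breakdown the thread reports.
tranches = {
    "Amazon (upfront)": 15,
    "Amazon (contingent on IPO/AGI milestone)": 35,
    "Nvidia (installments)": 30,
    "SoftBank (installments)": 30,
}

total = sum(tranches.values())
upfront_cash = tranches["Amazon (upfront)"]

print(f"Headline round size: ${total}B")        # $110B
print(f"Guaranteed cash today: ${upfront_cash}B")  # $15B
```

The gap between the $110B headline and the $15B of guaranteed cash is the core of the “circular financing” complaint: most of the round arrives later, in installments or on milestones, from the same parties collecting GPU and cloud revenue in the meantime.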
AGI triggers and contract games
- Several posts note prior reports that large tranches unlock on “AGI” or IPO; people question how AGI is legally defined.
- Cited definitions are financial (e.g., tech capable of $100B profit) rather than philosophical, reinforcing the view that “AGI” is partly a contractual/IPO milestone.
Business model, profitability, and sustainability
- Strong disagreement on sustainability:
  - One side: inference is already profitable with high gross margins; training is an upfront bet on future models.
  - Other side: each model generation is roughly 10x more expensive to train, current prices are heavily subsidized, and commoditization will erase margins.
- Concern that most usage is free or cheap, with unknown conversion to profitable paid usage; ads on ChatGPT are seen as a possible “enshittification” spiral.
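The margin dispute above can be made concrete with a toy model. All numbers below are illustrative assumptions, not OpenAI's actual costs or prices:

```python
# Toy gross-margin model for inference, using made-up numbers to show
# why both sides of the debate can cite plausible figures.

def gross_margin(price_per_mtok: float, cost_per_mtok: float) -> float:
    """Gross margin as a fraction of revenue for serving one million tokens."""
    return (price_per_mtok - cost_per_mtok) / price_per_mtok

# Bull case (hypothetical): serving cost well below list price.
bull = gross_margin(price_per_mtok=10.0, cost_per_mtok=3.0)   # 70%

# Bear case (hypothetical): subsidized pricing under commoditization.
bear = gross_margin(price_per_mtok=2.0, cost_per_mtok=3.0)    # -50%

print(f"bull-case margin: {bull:.0%}, bear-case margin: {bear:.0%}")
```

Note that this covers inference only: the claimed ~10x growth in training cost hits overall profitability rather than per-token gross margin, which is partly why the two sides talk past each other.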
Moat, competition, and product quality
- Some argue 800M–1B active users and brand recognition (“ChatGPT” as a generic term for AI) form a moat.
- Others counter that switching costs are trivial (just change API keys / apps), enterprises default to integrated incumbents (Microsoft, Google), and open or cheaper models (DeepSeek, Qwen, Claude, Gemini) are “good enough.”
- Several developers say Anthropic/Claude or other tools already outperform OpenAI for coding and specific workloads.
Technology shift vs. craze and broader risks
- Many see LLMs as a genuine, internet‑scale technology shift, unlike pure fads; even current models could drive large productivity changes.
- Still, there’s fear this is an overleveraged, system‑wide bet: circular deals, dependence on a few hyperscalers, power and chip constraints, and a perceived push to make LLMs “too big to fail” via national‑security framing.
- Some expect an eventual AI winter or sharp repricing; others think datacenter and energy build‑out will be the lasting legacy even if valuations collapse.