OpenAI's H1 2025: $4.3B in income, $13.5B in loss
Stock-Based Compensation and Employee Pay
- The reported US$2.5B in stock-based compensation for ~3,000 employees (~$830k per head for six months) drives a lot of debate.
- Several comments explain how private-company equity works: options/RSUs recorded on platforms like Carta, illiquid until IPO/exit or company-arranged secondaries, and mostly an accounting/dilution issue rather than a cash outflow.
- Others note OpenAI has repeatedly run employee tender offers and secondary liquidity, so for early staff this “illiquid” stock has already turned into real money.
- Some see this as “spreading the wealth”; others point out it’s still concentrated in a tiny top tier and likely highly skewed toward senior hires.
- High comp is framed as necessary to compete with Meta and others for a very small pool of top AI talent, reviving debates about “10x/50x engineers” and whether training people internally is viable when they can easily be poached.
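The per-head figure above is simple division of the reported totals. A minimal sketch, assuming the ~3,000 headcount quoted in the thread and a flat run rate when annualizing:

```python
# Back-of-the-envelope check of the per-head comp figure discussed above.
# Inputs are the rounded figures from the thread; headcount is approximate.
stock_comp_usd = 2.5e9   # reported H1 2025 stock-based compensation
employees = 3_000        # approximate headcount cited in the discussion

per_head_h1 = stock_comp_usd / employees
annualized = per_head_h1 * 2  # six months -> full year, assuming a flat run rate

print(f"per head (H1): ${per_head_h1:,.0f}")  # ~$833,333, i.e. the ~$830k quoted
print(f"annualized:    ${annualized:,.0f}")
```

Note this is an average; as the bullets point out, actual grants are likely heavily skewed toward senior hires.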
Revenue, Losses, and Cost Structure
- The big numbers: ~$4.3B revenue vs. $13.5B net loss in H1 2025, with ~$6.7B R&D, ~$2B sales & marketing, ~$2.5B stock comp, and ~$2.5B actual cash burn.
- Several commenters stress that net loss is heavily influenced by non‑cash items (stock comp, remeasurement of convertibles); estimated cash runway is ~3+ years at current burn.
- Others argue the unit economics are still “ugly”: training and inference remain expensive, infra depreciates fast, and older models lose value quickly as capabilities improve.
- Comparisons to Amazon circa 2000 mostly come out unfavorable: Amazon's worst loss was ~0.5x revenue vs. OpenAI at ~3x; Amazon's infrastructure had multi-decade life, whereas AI hardware/models are seen as short-lived.
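The ratios and the runway claim above follow directly from the quoted figures. A quick sketch, assuming the thread's rounded numbers and a flat burn rate (the implied cash balance is a derivation, not a reported figure):

```python
# Deriving the ratios discussed above from the thread's rounded H1 2025 figures.
revenue = 4.3e9     # H1 2025 revenue
net_loss = 13.5e9   # H1 2025 net loss (heavily influenced by non-cash items)
cash_burn = 2.5e9   # estimated actual cash burn for H1

loss_to_revenue = net_loss / revenue   # ~3.1x, vs ~0.5x for Amazon circa 2000
annual_burn = cash_burn * 2            # ~$5B/yr, assuming a flat run rate

# The "~3+ years of runway" claim implies roughly this much cash on hand:
implied_cash = 3 * annual_burn         # ~$15B+

print(f"loss/revenue:  {loss_to_revenue:.1f}x")
print(f"implied cash:  ${implied_cash / 1e9:.0f}B+")
```

The gap between the ~3x loss-to-revenue ratio and the much smaller cash burn is why commenters disagree: the headline loss looks dire, while the cash picture is far less so.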
Monetization: Ads, Affiliate, and “Enshittification”
- Many see ads, referrals, and checkout as the obvious path to profitability, essentially turning ChatGPT into a high‑margin ad and commerce platform analogous to Google Search.
- OpenAI is already experimenting with integrated checkout and "merchant fee" affiliate-type revenue; people expect full-fledged ad products, including sponsored recommendations in answers.
- There is concern that ads will erode trust, blur the line between answers and paid placement, and accelerate “enshittification,” but most concede that for mainstream users ads won’t be a dealbreaker if UX stays convenient.
Competition, Moat, and Bubble Risk
- A recurring theme: there is “no moat in AI” at the model level. Chinese and open-weight models (e.g., DeepSeek, Qwen, GLM) are already in the same rough performance band, some under permissive licenses.
- Counterargument: the real moat is distribution, brand, and productization. ChatGPT has massive consumer mindshare (especially among non‑technical users and teens), plus 700M+ weekly active users and deep integrations.
- Skeptics argue that brand is fragile when the switching cost is effectively "pick another chat box," and that Google, Meta, and Microsoft already own the major surfaces (search, browser, OS, productivity, social).
- Many see this as a classic bubble: Nvidia and cloud providers are the clear current winners; infra looks like a “money furnace”; datacenter gear depreciates far faster than historic network/rail infrastructure.
- Others say OpenAI can eventually slow frontier R&D, freeze on “good enough” models, let hardware improvements and optimizations drop costs, and then turn on ads and enterprise monetization to become sustainably profitable.