OpenAI completes deal that values company at $157B
Valuation, Returns, and Risk
- Many see the $157B valuation as requiring Google‑/Meta‑scale outcomes; investors in a late-stage round likely target lower multiples than early VCs, but still need large upside.
- Some argue that if any AI company dominates, trillion‑dollar market caps are plausible; others doubt OpenAI will be that winner.
- Comparisons are made to Tesla, Uber, WeWork, Theranos, and Facebook: huge hype cycles can resolve into either dominant businesses or spectacular failures.
- Concern that OpenAI reportedly burns billions per year and this raise may buy only ~1–2 years of runway; continued need for massive training spend is seen as structurally risky.
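The runway claim above is simple division. A minimal sketch, with purely illustrative cash figures (the actual raise size and burn rate are not stated here):

```python
# Back-of-the-envelope runway math behind the "~1-2 years" estimate.
# Both inputs are hypothetical illustrations, not reported figures.

def runway_years(cash_raised_b: float, annual_burn_b: float) -> float:
    """Years of runway if burn stays flat (both amounts in $B)."""
    return cash_raised_b / annual_burn_b

# e.g. a ~$6.5B raise against a ~$4-5B annual burn:
print(runway_years(6.5, 5.0))  # -> 1.3
print(runway_years(6.5, 4.0))  # -> 1.625
```

If burn grows with each model generation rather than staying flat, the realized runway would be shorter than this flat-burn estimate.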
Business Model, Revenue, and Profitability
- Revenue run rate reportedly in the several‑billion‑dollar range annually, but with heavy net losses; some question whether they turn a profit even on Plus subscriptions and API inference.
- Debate over whether model training is “capex” vs “opex”; one view is training is a consumable cost since models become obsolete quickly.
- Skepticism that they can 3–10× revenue repeatedly while maintaining margins, given fierce price competition and expensive compute.
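The capex-vs-opex debate above can be made concrete with a toy income statement. All figures below are hypothetical, chosen only to show how the accounting treatment of a training run flips the sign of reported profit:

```python
# Toy illustration of the "training: capex vs opex" debate.
# All dollar amounts are hypothetical.

def reported_profit(revenue_b: float, inference_cost_b: float,
                    training_cost_b: float, amortize_years: int) -> float:
    """Operating profit ($B) when a training run is amortized over
    `amortize_years` years (1 = fully expensed, i.e. treated as opex)."""
    annual_training_charge = training_cost_b / amortize_years
    return revenue_b - inference_cost_b - annual_training_charge

# Hypothetical: $4B revenue, $2B inference cost, one $3B training run.
print(reported_profit(4.0, 2.0, 3.0, amortize_years=1))  # expensed: -1.0
print(reported_profit(4.0, 2.0, 3.0, amortize_years=3))  # amortized: 1.0
```

The "consumable" view in the thread amounts to arguing that rapid model obsolescence forces `amortize_years` toward 1, so training behaves like opex regardless of how it is booked.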
Moat and Competition
- Strong disagreement over whether OpenAI has a moat.
- Claimed moats: brand recognition (ChatGPT), first‑mover advantage, integrations (Windows, mobile OS), scale in GPUs/servers, speed of iteration, and enterprise relationships.
- Counterpoints: competitors (Anthropic, Google, Meta, Nvidia, open‑weights models) are close in quality; LLMs increasingly look commoditized and swappable in many apps.
- Some think any lead is only “a few months”; open-source and cheap proprietary models erode differentiation.
Technology and Product Quality
- Mixed views on model superiority:
  - Some say o1/o1‑preview is clearly ahead in reasoning and coding; others find only modest gains over GPT‑4o and prefer Claude or other models on price/performance and usability.
  - Reports of quirks (e.g., language switching, verbosity) and suggestions that similar reasoning can be approximated by structured prompting with older models.
- Several commenters feel progress is slowing (logistic curve), prompting OpenAI’s shift toward inference‑time computation and productization.
AGI / Superintelligence Debate
- Long subthread on whether AGI already exists, how to define “intelligence,” and distinctions between AGI and ASI.
- Some claim current models meet a broad definition of AGI (general problem-solving); others insist OpenAI’s own AGI definition (outperform humans at most economically valuable work) is far from met.
- Discussion of power-law dynamics: if anyone achieves strong AGI/ASI, returns and control might be extreme, but many doubt a single permanent winner.
Infrastructure, Microsoft, and Costs
- OpenAI is deeply dependent on Microsoft/Azure for compute; this is seen by some as a moat (scale, relationship) and by others as a vulnerability (no owned datacenters, custom silicon).
- Debate over whether building their own datacenters would materially lower costs, given existing Azure discounts and capex/time requirements.
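The build-vs-rent question above reduces to comparing amortized capex plus operating cost against discounted cloud pricing. A minimal sketch with entirely hypothetical numbers (no actual Azure discount or datacenter cost is stated here):

```python
# Toy build-vs-rent comparison (all figures hypothetical, in $B).

def annual_cost_own(capex_b: float, lifetime_years: int,
                    annual_opex_b: float) -> float:
    """Yearly cost of an owned datacenter: straight-line capex + ops."""
    return capex_b / lifetime_years + annual_opex_b

def annual_cost_rent(list_price_b: float, discount: float) -> float:
    """Yearly cost of cloud compute at a negotiated discount."""
    return list_price_b * (1 - discount)

# Hypothetical: $10B datacenter over 5 years plus $1B/yr power/ops,
# versus $5B/yr at list with a 40% partner discount.
print(annual_cost_own(10.0, 5, 1.0))   # -> 3.0
print(annual_cost_rent(5.0, 0.40))     # -> 3.0
```

Under these made-up inputs the two options break even, which mirrors the thread's point: whether building helps depends heavily on the (undisclosed) Azure discount and on hardware lifetime assumptions.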
Future Monetization and “Enshittification”
- Expectation that to justify the valuation, OpenAI may:
  - Raise prices (including high-end enterprise tiers),
  - Introduce ads or sponsored outputs, and
  - Degrade free tiers (lower quality, more constraints).
- Some claim ad‑like behavior is already being tested; others worry that unreliable outputs make ad placement tricky.
- Fear that current “golden era” of generous, high‑quality service will give way to enshittification as revenue pressure mounts.
Apple, Other Investors, and Governance
- Noted that Apple reportedly walked away from participating; speculated reasons include the valuation, Apple's conservative deal-making style, and its own internal LLM efforts.
- Presence of certain investors (large sovereign funds, SoftBank) triggers skepticism among some; others defend leading VC firms in the round as highly sophisticated, not “dumb money.”
- Concern about governance: the shift from non‑profit to for‑profit is seen by some AI researchers as a betrayal that could hurt talent attraction.
Open Source and Local Models
- Many emphasize rapid improvement of open‑weight models (e.g., Llama) and local‑hardware inference.
- View that in 5–10 years, GPT‑4‑class models may run locally on mainstream devices, making generic text LLMs a cheap commodity and pushing value capture to integrated platforms (OS, productivity tools).
- Others counter that subtle behavioral differences, safety tuning, speed, and surrounding tooling still make frontier proprietary models non‑interchangeable.
Hype, Ethics, and Marketing
- Accusations that OpenAI’s leadership uses exaggerated rhetoric (e.g., claims of “high‑school” or “PhD‑level” intelligence, imminent “Her”‑like assistants) to fuel the valuation.
- Some see the company as “snake oil + real research”: undeniably impactful technology paired with overblown promises.
- Broader worries that centralized, closed models trained on user inputs create a power imbalance and “economic self‑harm” for knowledge workers, while others argue adoption is rational and inevitable.