Why OpenAI's $157B valuation misreads AI's future (Oct 2024)

Capital intensity, valuations, and funding risk

  • Several comments frame OpenAI’s CapEx as historically large and paradigm‑shifting, but likely damaging to the broader startup ecosystem by crowding out non‑AI funding.
  • Many expect a “haircut” on AI valuations when revenue/profits underwhelm, comparing this cycle to SoftBank’s failed blitzscaling bets and predicting a possible “AI nuclear winter.”
  • OpenAI’s ~$157B valuation is seen as disconnected from fundamentals: revenue is growing fast, but costs scale with usage and the planned infrastructure spend is enormous, leaving multiples that look “crazy” even by big‑tech standards.

DeepSeek’s impact: cost, moat, and credibility

  • DeepSeek is widely cited as evidence that model training can be much cheaper and that OpenAI’s technical moat is weak or nonexistent.
  • Some argue DeepSeek merely stacked known techniques (MoE, FP8 training, attention compression, PTX‑level kernel optimization, RL) and optimized aggressively under hardware constraints; impressive engineering, but nothing fundamentally new.
  • Others question DeepSeek’s cost claims (omitted pretraining, GPU acquisition, and data costs) and read them partly as geopolitical signaling, but agree the inference efficiency gains are real and verifiable.
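The MoE technique cited above is the main lever behind the cheaper‑training claim: parameters scale with the number of experts, but compute per token scales only with the few experts actually routed to. A minimal top‑k routing sketch (illustrative only — shapes, names, and the linear “experts” are assumptions, not DeepSeek’s actual architecture):

```python
import numpy as np

def topk_moe(x, router_w, experts, k=2):
    """Route input x to the k highest-scoring experts and blend their outputs.

    x: (d,) input vector; router_w: (n_experts, d) router weights;
    experts: list of callables mapping (d,) -> (d,).
    Only k experts run per token, which is where the FLOP savings come from.
    """
    logits = router_w @ x                       # (n_experts,) routing scores
    top = np.argsort(logits)[-k:]               # indices of the k best experts
    gates = np.exp(logits[top] - logits[top].max())
    gates /= gates.sum()                        # softmax over the selected k only
    return sum(g * experts[i](x) for g, i in zip(gates, top))

rng = np.random.default_rng(0)
d, n = 4, 8
router = rng.normal(size=(n, d))
# Toy "experts": fixed linear maps standing in for per-expert feed-forward blocks.
ws = [rng.normal(size=(d, d)) for _ in range(n)]
experts = [lambda v, w=w: w @ v for w in ws]
y = topk_moe(rng.normal(size=d), router, experts, k=2)
print(y.shape)  # (4,)
```

With n=8 experts and k=2, only a quarter of the expert parameters are exercised per token; real MoE models add load‑balancing losses and batched expert dispatch on top of this.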

Cloud vs edge and the shrinking API margin

  • Many see plummeting training/inference costs and strong open models as bearish for API margins and centralized cloud AI: if high‑quality models can run on phones or local clusters, why pay OpenAI?
  • Counterpoint: top models will still outstrip local hardware; enterprises will pay for the very best, at least for some workloads.

Is AI a fad? Lived experience vs skepticism

  • One camp dismisses the “AI is a fad” view, pointing to concrete productivity gains: coding assistants, game prototypes, customer service, legal/medical workflows, etc.
  • Another camp is unimpressed by the current UX (prompting overhead, erosion of personal skills) and uses self‑driving cars as the benchmark for overpromised technology; they see more hype than transformative value.
  • Multiple engineers report huge productivity boosts (e.g., using Sonnet/Cursor as “power tools” for large codebases), insisting doubters are “holding it wrong.”

Where value will accrue: platforms vs applications

  • Many think foundational models will commoditize; the durable value will be in vertical, workflow‑integrated applications and niche domain models (medicine, logistics, hospitality).
  • There’s debate over whether application‑level moats (data lock‑in, switching costs, personalization) will be strong enough to sustain margins.

Open source and long‑term structure of the market

  • DeepSeek, Llama, etc. are likened to Linux: open ecosystems that eventually overpower proprietary stacks.
  • Some predict that in a few years, nobody will remember “Open”AI, and that human‑AI collaboration on widely available open models will be where the real breakthroughs occur.