OpenAI's fall from grace as investors race to Anthropic

Valuations, funding, and secondary market

  • Several commenters highlight that secondary demand for OpenAI shares appears weak, with claims that large blocks can’t find buyers, seen as a bad sign for an IPO.
  • The valuation gap cited in the article ($852B vs. $380B) is viewed by some as investors “rotating” into Anthropic as the cheaper big bet.
  • Others emphasize this says more about herd behavior and FOMO than clear fundamentals.
  • There is debate about whether OpenAI’s huge “raises” are real cash today or mostly forward commitments, SPVs, and complex financing structures.

OpenAI strategy, leadership, and trust

  • Many argue OpenAI squandered an early lead through hubris, slow iteration on core products, and scattered strategy (chasing AGI more than clear business lines).
  • Leadership is frequently criticized as inconsistent, overly media-focused, and untrustworthy; the board firing episode is cited as an early red flag.
  • Some still think OpenAI has the best overall model/API and note it is cheaper for some workloads, but see only marginal technical advantage.

Anthropic’s positioning and perception

  • Anthropic is perceived as more focused (enterprise and coding/agents) and more disciplined about a path to revenue.
  • Its leadership is described as more straightforward about AI’s disruptive potential, though others see this as self‑serving hype.
  • Several commenters question the “good guys” branding, arguing Anthropic ultimately behaves like any profit-maximizing frontier lab.

Model quality, tools, and developer experience

  • Developers report mixed experiences: some strongly prefer Claude Code; others say OpenAI’s Codex now matches or exceeds it, especially for large, complex codebases.
  • Mindshare is seen as volatile: last year ChatGPT was the default, this year many say Claude/Claude Code is the new hotness, but the shift is viewed as easily reversible.
  • Some users report recent quality drops and tight rate limits at both companies, prompting them to switch between providers with little friction.

Competition: Big tech, China, and local models

  • Google/Gemini is seen by some as an under‑marketed “dark horse,” especially where it’s already embedded in Workspace or Copilot‑style enterprise stacks.
  • Chinese models (e.g., Qwen, DeepSeek) are repeatedly cited as “good enough” at much lower cost, especially when used with good tooling.
  • Several note that if local or open‑weight models handle 80–90% of current SaaS use cases, large centralized labs could be in serious trouble.

Economics, moats, and sustainability

  • Many argue both OpenAI and Anthropic share the same core problems: weak moats (easy switching), bad unit economics, and massive capex obligations.
  • A counterpoint is that at high utilization, their compute costs sit well below prices, so the game is driving enough demand to keep GPUs busy.
  • There is skepticism that any frontier lab will be truly profitable this decade, and a widespread view that current valuations in the hundreds of billions cannot be justified on fundamentals.

Overall sentiment

  • Tone is sharply divided: excitement about Anthropic’s recent traction and tools, but broad skepticism about all frontier labs’ ethics, narratives, and valuations.
  • Many expect a correction once IPOs, earnings pressure, and the rise of cheaper/local alternatives collide with current hype.