OpenAI probably can't make ends meet. That's where you come in

Google vs OpenAI economics and infrastructure

  • Commenters broadly agree Google can easily fund Gemini with ad revenue, while OpenAI cannot self-fund at similar scale.
  • Google is described as far ahead in datacenter maturity (custom TPUs, cooling, networking) and chip co-design, giving it cheaper inference/training and a long runway for “moonshots.”
  • Some argue Google’s strategic goal is to commoditize AI models as a complement to its core businesses, not to monetize Gemini directly.
  • Others note that ChatGPT has partially displaced Google Search for some users, but point out that this displacement has not been profitable for OpenAI.

OpenAI’s business model, losses, and scale

  • Strong disagreement over whether high usage (e.g., ~800M weekly active users) implies a good business. Critics stress that if each query loses money, growth only deepens the losses.
  • Several comments emphasize the unprecedented scale of OpenAI’s obligations: speculative references to trillions in infrastructure spending that ordinary VC funding cannot cover.
  • Many see no clear, credible path to profitability at current pricing and costs; commenters distinguish “money-losing” (negative unit economics) from “losing” in the colloquial sense of being beaten by competitors.
  • Some argue current generative AI is not the real AGI “moonshot,” but the compute and photonics infrastructure could be.
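The unit-economics argument above can be made concrete with a toy calculation. All figures here are hypothetical placeholders for illustration, not OpenAI's actual revenue or serving costs:

```python
# Hypothetical unit-economics sketch; every number below is illustrative,
# not a real figure for OpenAI or any other provider.

def monthly_result(users, queries_per_user, revenue_per_query, cost_per_query):
    """Net monthly result in dollars: positive = profit, negative = loss."""
    margin = revenue_per_query - cost_per_query  # per-query profit (or loss)
    return users * queries_per_user * margin

# If each query earns less than it costs to serve, growth scales the loss:
small = monthly_result(users=100e6, queries_per_user=30,
                       revenue_per_query=0.002, cost_per_query=0.003)
large = monthly_result(users=800e6, queries_per_user=30,
                       revenue_per_query=0.002, cost_per_query=0.003)
print(small, large)  # an 8x larger user base means an 8x larger monthly loss
```

With a negative per-query margin, the loss is linear in usage, which is the critics' point: scale fixes nothing unless the margin itself flips positive.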

Loan guarantees, bailouts, and systemic risk

  • Central concern: OpenAI and partners are seeking U.S. government loan guarantees for massive AI buildout, effectively socializing risk while privatizing gains.
  • Comparisons are made to 2008 bank bailouts and auto bailouts; critics see “socialism for corporations, capitalism for workers.”
  • Others clarify that guarantees are not direct cash bailouts and would fund datacenters, chips, and power that might remain useful even if the bubble pops—but highlight opportunity costs versus social spending or other infrastructure.
  • Some fear AI firms are trying to become “too big to fail” by entangling Nvidia, cloud providers, and the stock market.

Politics, propaganda, and public interest

  • Several comments fear AI will follow the standard “enshittification” pattern once captured by powerful political actors.
  • LLMs are framed as the endgame tool for advertising and propaganda; concerns are raised about how political narratives (e.g., Jan 6) might be shaped if firms depend on government support.
  • A minority hopes competition from China or other actors will drive cheaper, more efficient AI instead of subsidized U.S. incumbents.