OpenAI builds first chip with Broadcom and TSMC, scales back foundry ambition

Broadcom partnership and reputation

  • Some see working with Broadcom as another warning sign, citing its private‑equity‑style acquisitions (e.g., VMware, CA) and its reputation for extracting value at customers' expense.
  • Others counter that Broadcom spends heavily on R&D, holds strong IP, and is a top-tier ASIC/networking silicon vendor used by Google, Meta, Apple, etc.
  • There’s confusion between “Broadcom the serious chip company” and “Broadcom the PE holding/raiding vehicle”; both views coexist.
  • Several argue Broadcom is an obvious partner for custom AI ASICs given its experience with TPUs and xPUs.

Abandoned $7T fab scheme and Altman’s ambitions

  • The thread fixates on the reported idea of raising ~$7T to build ~36 AI fabs, calling it “insane,” “beyond belief,” and beyond the capacity of any realistic capital market.
  • Comparisons are made to US GDP, national debt, total US investment, and defense budgets to underline the scale.
  • Some suggest it was PR/anchoring or FOMO marketing rather than a serious plan; others see it as evidence of grandiosity and detachment from reality.
  • A minority notes a later clarification that the figure referred to collective, eventual global investment in compute rather than a single-company raise, but many remain skeptical.
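The scale comparisons above can be made concrete with a quick back-of-envelope calculation. The ballpark figures below (US GDP, defense budget) are approximate public values supplied for illustration, not numbers from the thread:

```python
# Rough scale check for the reported ~$7T figure.
# All reference values are approximate, publicly cited ballparks.
TRILLION = 1e12

proposed_raise = 7 * TRILLION        # reported fab-buildout figure
us_gdp_2023 = 27.4 * TRILLION        # ~US GDP, 2023
us_defense_budget = 0.84 * TRILLION  # ~US defense budget, FY2024

print(f"Share of one year of US GDP: {proposed_raise / us_gdp_2023:.0%}")
print(f"Years of US defense spending: {proposed_raise / us_defense_budget:.1f}")
```

Even on these rough numbers, the figure works out to roughly a quarter of annual US GDP, which is why the thread treats it as outside normal capital-market scales.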

Custom chips, TSMC, and timelines

  • Consensus: building fabs is infeasible; designing custom chips with TSMC is the realistic “scaled‑back” path.
  • Estimates for getting a usable chip into production range from ~2 to 4 years; several expect the first generation not to be production‑worthy.
  • Hardware is described as slow, finicky, and extremely expensive; software stacks (e.g., AMD’s) are highlighted as a major bottleneck.
  • Samsung is viewed as 2–3 years behind TSMC; packaging and HBM are seen as AI chip bottlenecks.

Nvidia dependence and diversification

  • Demand for Nvidia GPUs is described as “insane,” amid rumors of supply and yield issues.
  • Large buyers are said to be diversifying into AMD, custom accelerators, and alternative clouds; some companies frame non‑Nvidia use as “strategic diversification,” even when the real driver is lack of access to Nvidia parts.
  • There’s curiosity about how this squares with OpenAI’s tight Azure/Microsoft integration and exclusivity.

OpenAI’s economics and moat

  • Several posts highlight massive projected losses and heavy reliance on subsidized Microsoft GPU pricing.
  • Some argue OpenAI’s moat is thin beyond brand and first‑mover advantage; custom chips could be an attempt to build a cost/moat advantage.
  • Others compare foundation-model providers to airlines: capital‑intensive, commoditized, low-margin once models converge.

AGI/ASI, singularity, and LLM capabilities

  • Strong disagreement over whether current models qualify as AGI:
    • One camp claims ChatGPT is already “artificial general intelligence” (but not superintelligence), arguing “general” ≠ “superhuman.”
    • Opponents call this delusional, insisting AGI must at least match average human performance across broad tasks.
  • Distinction between AGI (human-level generality) and ASI (superintelligence) is repeatedly emphasized; many complain these are conflated in public discourse.
  • Skeptics doubt the “singularity” narrative, likening it to a tech‑rapture; others say it deserves serious consideration but timelines are unknowable.
  • Debate continues over whether LLMs do genuine reasoning vs pattern‑matching “reasoning steps” from training data; cited research supports both sides.
  • Some argue that if ASI is real, any finite investment like $7T is either absurdly high (if ASI is impossible) or trivially low (if it’s inevitable).

Real‑world impact and adoption of LLMs

  • Views diverge on societal impact:
    • Some say that, aside from tools like code copilots, they see little broad cultural change and few non‑tech users who keep using LLMs over time.
    • Others report widespread quiet adoption among students and non‑technical knowledge workers (marketing, accounting, small business, politics) for drafting, summarizing, and analysis.
  • LLMs are compared to calculators as a form of cognitive offloading; concerns are raised about “nerfing” some mental skills, though most see this as a reasonable tradeoff.
  • Education is described as heavily affected, especially essay/homework integrity.
  • In enterprises and government, LLMs are said to be creeping into communications (internal emails, summaries) and customer support, often behind the scenes.
  • Some note that consumer‑facing use often looks like “slop” generation, spam, or work‑feigning.
  • Overall consensus: very useful for search/summarization/writing, clearly not a magic AGI/ASI singularity engine.

Meta‑discussion and tone

  • Many criticize hype, “fake it till you make it” startup culture, and grandiose pitches (e.g., trillion‑dollar asks, “wars will be fought over AI” rhetoric).
  • Others argue that oversized ambition helped accelerate progress (e.g., rapid arrival and refinement of ChatGPT‑like systems), even if some plans were unrealistic.