Nearly half of Nvidia's revenue comes from four mystery whales each buying $3B+
Debate over an AI bubble
- Many see a “massive AI bubble,” likening it to the dot-com bust, earlier railroad and telecom overbuilds, or the 1980s AI winter: real technology, but overhyped and overfunded, followed by a funding collapse and consolidation.
- Others argue this isn’t like pure-speculation bubbles (NFTs/crypto/tulips) because Nvidia and many AI products already generate substantial revenue and real usage.
- Several expect a correction in valuations and capex growth rather than in the underlying technology, much as the internet thrived after the dot-com crash.
Nvidia, GPU demand, and future glut
- Nvidia’s profits and margins are viewed as “insanely high”; some expect competition or a capex slowdown to compress them.
- Commenters anticipate that a severe shortage will eventually be followed by a glut of used datacenter GPUs (as after the crypto-mining bust), though the quality and reliability of ex-datacenter cards are debated.
- Gamers hope for cheaper GPUs, but many doubt much “trickle down” given product segmentation and the priority given to higher-margin datacenter parts.
Who are the “mystery whales”?
- Most assume the big four buyers are hyperscalers: Microsoft, Meta, Google, Amazon; some articles cited in-thread say exactly that.
- Other large buyers mentioned: Oracle, CoreWeave, Lambda, Chinese cloud companies, Tesla/xAI.
- Some speculate about indirect government/NSA/DoE demand, but note such purchases would likely be routed through intermediaries.
Custom silicon, CUDA moat, and competition
- Cloud and big-tech efforts: Google TPU, AWS Trainium, Meta MTIA, Microsoft Maia, Tesla D1, plus specialized players (Groq, Cerebras, etc.).
- CUDA is seen as a major moat; AMD/Intel and others have struggled to attract training workloads despite capable hardware.
- Ideas discussed: CUDA-compatibility layers and open alternatives that could gradually weaken Nvidia lock-in (see the sketch below); skepticism remains about the technical difficulty and the incentives involved.
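To make the lock-in argument concrete, here is a minimal sketch (not from the thread) of why the moat is contested at the application layer: high-level frameworks such as PyTorch hide the vendor backend, and ROCm builds of PyTorch expose AMD GPUs under the same "cuda" device string, so device-agnostic code like this runs on either vendor. The lock-in the thread points to lives lower down, in custom kernels, fused ops, and mature tooling.

```python
# Minimal sketch: device-agnostic PyTorch code. On ROCm builds, AMD GPUs
# also appear under the "cuda" device string, so nothing here is
# vendor-specific; the harder lock-in is in hand-written CUDA kernels.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(8, 4096, device=device)

with torch.no_grad():
    y = model(x)
print(y.shape, y.device)
```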
Open vs closed and self-hosted AI
- Some companies are commissioning ~$20k in-house AI servers running open-source models, citing flexibility and richer APIs than proprietary services (see the client sketch after this list).
- There’s uncertainty whether proprietary “frontier” models or a diverse open-source ecosystem will dominate long term.
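As an illustration of the self-hosting pattern described above, here is a hedged sketch that assumes the in-house box runs one of the common open-source serving stacks (e.g. vLLM, llama.cpp's server, or Ollama) exposing an OpenAI-compatible endpoint; the hostname, port, model name, and key below are placeholders, not details from the thread.

```python
# Hypothetical client for an in-house model server exposing the
# OpenAI-compatible /v1/chat/completions endpoint. Host, port, model name,
# and API key are illustrative placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://ai-server.internal:8000/v1",  # in-house box, not a cloud API
    api_key="unused-local-key",                    # many local servers ignore the key
)

resp = client.chat.completions.create(
    model="llama-3-70b-instruct",  # whichever open-weight model the server loads
    messages=[{"role": "user", "content": "Summarize yesterday's support tickets."}],
)
print(resp.choices[0].message.content)
```

Part of the appeal cited in the thread is that the same client code can target either a vendor API or the in-house server simply by changing `base_url`.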
Use cases, productivity, and limits
- Reported uses: search/lookup, translation, moderation, coding assistance, writing, design, data analysis, medical and legal document work, recommendation systems.
- Individual experiences vary: some feel AI is a “new power” and increasingly indispensable; others see only modest productivity gains and fading novelty (e.g., image generation).
- Open questions raised:
  - Whether LLM quality is plateauing and hallucinations can be tamed enough for broad deployment.
  - How big non-LLM markets (robotics, autonomy, scientific computing, drug discovery) will be, and whether they sustain current GPU growth.
Concentration and systemic concerns
- Worry that AI progress and infrastructure are consolidating into a handful of tech giants and clouds; some advocate open source and Linux as partial counterweights.
- Others note that even if this is a bubble, the overbuilt infrastructure, like railroads or dark fiber before it, could still provide long-term economic benefit.