Nvidia Rides AI Wave to Pass Apple as Largest Company
Apple vs. Nvidia Business Durability
- Some argue Apple has a diversified ecosystem (iPhone, AirPods, Watch, services, health, VR R&D), while Nvidia is heavily concentrated in datacenter GPUs.
- Others counter that Apple is also highly dependent on the iPhone; many “diversified” lines are essentially iPhone accessories or iPhone-driven services.
- Debate over R&D: cited figures show Apple spends tens of billions of dollars a year, but some say R&D accounting is distorted by tax incentives and that much of the spend is simply engineering salaries.
- A thread explores how tech giants build “walled gardens” and ignore power users in favor of a possibly non-existent “average user,” potentially stifling innovation.
Nvidia’s Moat: GPUs, CUDA, and Competition
- One side: “a GPU is all you need” if you’re one of very few suppliers; Nvidia dominates AI GPUs and has a massive CUDA ecosystem plus related libraries for dataframes, compression, vector search, and more (see the sketch after this list).
- Counterpoint: CUDA isn’t an unassailable moat; ROCm exists, big tech is funding open alternatives, and ASICs for stable NN architectures could undercut Nvidia on cost and performance.
- Some emphasize Nvidia’s moat is strongest in training, especially for novel architectures; others say that moat may narrow over time.
- Concern that Nvidia is currently propelled mainly by H100 sales to big tech; if AI demand normalizes, Nvidia could revert toward being “just” a gaming GPU company.
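To make the “CUDA ecosystem” point concrete, here is a minimal sketch using RAPIDS cuDF, Nvidia’s GPU dataframe library. It assumes a CUDA-capable GPU and an installed `cudf` package; the file name and column names are hypothetical.

```python
# Minimal sketch: GPU dataframe aggregation with RAPIDS cuDF (pandas-like API).
# Assumes a CUDA-capable GPU and the cudf package; file and columns are hypothetical.
import cudf

# Load a (hypothetical) CSV directly into GPU memory.
df = cudf.read_csv("trades.csv")  # columns: ticker, price, volume

# Group and aggregate entirely on the GPU, with no explicit kernels.
summary = df.groupby("ticker").agg({"price": "mean", "volume": "sum"})

# Move the small result back to the host for display.
print(summary.to_pandas())
```

The argument in the thread is that this breadth of drop-in GPU tooling, not the silicon alone, is what competitors would have to replicate.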
Valuation, Market Cap, and Risk
- Disagreement over the significance of Nvidia’s market cap: some call market cap the key metric for investors (market-cap delta = ROI); others argue profit, growth, and ratios like P/E matter more (see the worked arithmetic after this list).
- Critics say market cap is “made up” if there are no real buyers at the quoted price; defenders respond that Nvidia is extremely liquid, so its cap is meaningful.
- Skepticism that current valuation (high P/E) is sustainable unless AI markets become enormous and Nvidia stays very profitable.
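As a back-of-the-envelope illustration of the two camps above, a minimal sketch of the arithmetic; every number is an illustrative placeholder, not an actual Nvidia figure.

```python
# Back-of-the-envelope valuation arithmetic; all numbers below are
# illustrative placeholders, not real Nvidia figures.

share_price = 140.0          # USD per share (placeholder)
shares_outstanding = 24.5e9  # shares (placeholder)
annual_net_income = 60e9     # USD, trailing twelve months (placeholder)

market_cap = share_price * shares_outstanding
pe_ratio = market_cap / annual_net_income

# The "market-cap delta = ROI" view: a holder's return over a period is
# driven by the change in market cap (ignoring dilution and dividends).
market_cap_later = 1.10 * market_cap  # assume the cap rises 10%
holder_return = market_cap_later / market_cap - 1

print(f"market cap: ${market_cap/1e12:.2f}T, P/E: {pe_ratio:.0f}x, return: {holder_return:.0%}")
```

The P/E skeptics’ point is visible here: at a high multiple, either earnings must grow into the valuation or the multiple must compress.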
AI Hype, Productivity, and Real Use Cases
- Multiple comments suggest we’re approaching the “trough of disillusionment”: huge AI spend but no clear aggregate productivity gains yet.
- Reported downsides: better phishing/scams, degraded search and web content, AI-written homework and papers of questionable quality, and more social media bots.
- Others argue transformative technologies often take many years to show productivity impact; it’s too early to judge LLMs.
- One concrete positive example: document recognition and processing (contracts, invoices, etc.) has become dramatically cheaper and faster to deploy using LLMs, reducing dependence on scarce ML experts (see the sketch after this list).
- There’s also mention of long-standing GPU use in HPC (weather, simulation, remote sensing), which benefits from CUDA and may grow steadily, though some doubt this justifies “most valuable company” status.
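As an illustration of the document-processing point, a minimal sketch using the OpenAI Python SDK; the model name, prompt, and invoice fields are assumptions, and any hosted or local LLM with a text endpoint would work the same way.

```python
# Minimal sketch: extracting structured fields from an invoice with an LLM.
# Assumes the openai package and an OPENAI_API_KEY in the environment;
# the model name and field list are illustrative choices, not from the thread.
import json
from openai import OpenAI

client = OpenAI()

def extract_invoice_fields(invoice_text: str) -> dict:
    prompt = (
        "Extract vendor, invoice_number, total_amount, and due_date from the "
        "invoice below. Reply with JSON only.\n\n" + invoice_text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; any capable LLM works
        messages=[{"role": "user", "content": prompt}],
    )
    # A production pipeline would validate the output; this sketch trusts it.
    return json.loads(resp.choices[0].message.content)

# Usage: fields = extract_invoice_fields(open("invoice.txt").read())
```

The contrast drawn in the thread is with pre-LLM pipelines, which needed custom OCR/ML models and specialists for each new document type.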
Tariffs, Geopolitics, and Competition
- Questions about how future tariffs might affect AI clusters and Nvidia’s bottom line; possibilities include hosting overseas or passing costs to customers.
- Some argue tariffs function like a tax and won’t change much for software-heavy businesses; others note likely retaliation against US tech giants.
- Huawei is highlighted as an emerging competitor: strong in Chinese smartphones and forced to develop its own high-end chips after being cut off from Nvidia.
GPU History and AI Hardware Pricing
- Side debate on who “invented” the GPU: Sony’s PlayStation chip, earlier 2D/3D accelerators, or Nvidia’s GeForce 256; consensus is that definitions are fuzzy and evolve over time.
- Complaints that Nvidia is “overpriced” and segments products by VRAM capacity to price-gouge; some users consider alternatives like upcoming Apple M4 Ultra systems or tinygrad-based multi-GPU rigs.
- A broader skeptical view holds that LLM-centric AI is a bubble, much of it offering “BS generation” and overlapping services; as on-device AI improves, demand for massive GPU clusters could fall, compressing Nvidia’s valuation.