Anthropic expands partnership with Google and Broadcom for next-gen compute
Scaling Laws, Capabilities, and Theoretical Limits
- Some argue current models are generalizable learning machines that will keep improving with more compute, citing neural scaling laws that show no clear plateau.
- Others insist transformers are mostly sophisticated interfaces to recorded data and can’t “learn the universe,” especially when the relevant information was never captured in text.
- Counterpoint: models already handle images and other sensor data (e.g., weather forecasting), so the “text only” characterization is inaccurate.
- A separate line of argument notes hard computational limits (e.g., halting problem, EXPTIME) that no amount of AI scaling can overcome.
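A toy illustration of the halting-problem point: checking termination under a step budget is only a semi-decision procedure, so a larger compute budget widens what you can confirm but never settles the general question. The function names below are made up for this sketch, not anything from the thread.

```python
from typing import Iterator, Optional

def halts_within(program: Iterator[None], max_steps: int) -> Optional[bool]:
    """Run a program (modeled as a generator; each yield is one step).
    Returns True if it halts within the budget, None if the budget runs
    out. None is NOT evidence of non-termination: no budget increase
    turns this semi-decision procedure into a decision procedure."""
    for _ in range(max_steps):
        try:
            next(program)
        except StopIteration:
            return True
    return None

def collatz(n: int) -> Iterator[None]:
    # Halts for every n ever tested (27 reaches 1 after 111 steps),
    # but termination for all n is a famous open problem.
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        yield

def loop_forever() -> Iterator[None]:
    while True:
        yield

print(halts_within(collatz(27), 1000))     # True
print(halts_within(loop_forever(), 1000))  # None: inconclusive, not "False"
```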
Environmental, Resource, and Power Constraints
- A major concern is whether planetary ecology and energy systems can sustain AI’s growth, especially if datacenters draw gigawatt‑scale power comparable to that of entire cities.
- Some see AI’s footprint as smaller than that of other activities (e.g., meat consumption), but others stress that multiple systemic changes are needed at once.
- Debate over whether the true bottleneck is power, physical chips at cutting‑edge nodes, or ultimately capital.
- Power is treated as the dominant operational constraint and a proxy for cost; gigawatts are used because FLOPs and token economics are less intuitive.
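A back-of-envelope sketch of why gigawatts work as a cost proxy: every constant below (electricity price, per-chip draw, per-chip throughput) is an illustrative assumption, not a figure from the article or thread.

```python
# What a 1 GW datacenter implies per year, under assumed constants.
GW = 1.0
HOURS_PER_YEAR = 8760
USD_PER_KWH = 0.05      # assumed industrial electricity price
ACCEL_WATTS = 1_000     # assumed per-accelerator draw incl. overhead
ACCEL_FLOPS = 1e15      # assumed sustained FLOP/s per accelerator

energy_kwh = GW * 1e6 * HOURS_PER_YEAR        # kW * h
power_bill = energy_kwh * USD_PER_KWH         # USD per year
accelerators = GW * 1e9 / ACCEL_WATTS         # chips 1 GW can feed
fleet_flops = accelerators * ACCEL_FLOPS      # sustained FLOP/s

print(f"electricity bill:     ${power_bill / 1e6:,.0f}M/yr")  # ~$438M/yr
print(f"accelerators powered: {accelerators:,.0f}")           # ~1,000,000
print(f"fleet compute:        {fleet_flops:.1e} FLOP/s")      # ~1e21
```

One number, 1 GW, thus compresses an operating bill, a chip count, and a fleet-wide FLOP/s figure that would otherwise each need their own context.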
Revenue, Bubble Talk, and Run-Rate Ambiguity
- A reported jump from ~$19B to ~$30B in annualized revenue run rate within a month sparks debate over whether AI is a bubble or an extremely high‑ROI investment.
- Some say bubble status and real value can coexist; others note that run-rate figures can be framed favorably and are not audited to public-company standards.
- Discussion of whether this is consistent with a recent court filing citing at least $5B in lifetime revenue; several posters show the figures can align mathematically given very rapid recent growth (worked example below), though accounting details remain unclear.
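A worked example of that consistency argument, assuming a single geometric monthly growth rate (an assumption; actual revenue curves are not disclosed): run rate annualizes only the latest month, while lifetime revenue integrates the whole curve, so the two can differ by a large factor.

```python
# Assumed geometric growth fitted to the thread's two run-rate figures.
# Run rate annualizes the latest month: $30B/yr -> $2.5B that month.
latest_month = 30e9 / 12            # $2.5B
prior_month = 19e9 / 12             # ~$1.58B, from the ~$19B run rate
g = latest_month / prior_month      # implied monthly growth factor ~1.58x

# Sum revenue backwards in time, assuming the same factor held throughout.
lifetime, month = 0.0, latest_month
for _ in range(36):                 # 3 years is more than enough here
    lifetime += month
    month /= g

print(f"monthly growth factor:    {g:.2f}x")
print(f"implied lifetime revenue: ${lifetime / 1e9:.1f}B")  # ~$6.8B
print(f"annualized run rate:      ${latest_month * 12 / 1e9:.0f}B")
```

Under this aggressive-growth assumption, lifetime revenue lands near $6.8B, above the filing's "at least $5B" floor; slower historical growth would push the lifetime total higher, not lower.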
Partnerships, TPUs, and Broadcom
- Broadcom’s software-side reputation (e.g., VMware licensing practices) is raised, but others say it’s irrelevant here: Broadcom designs and implements key TPU components, and TSMC fabricates them.
- Consensus: if you want TPUs at scale, you inevitably work with Broadcom; the main strategic issue is securing leading‑edge custom silicon.
Claude Code, Moats, and Access to Compute
- Some question what Claude Code does that open‑source tools couldn’t replicate.
- Proposed “moats”: frontier models (Opus, Sonnet), massive compute access, and ecosystem lock‑in; critics argue none are durable as open models improve.
- Pricing models differ (flat subscription vs per‑token), and coding tools are seen as both a product and a customer‑acquisition funnel; a break-even sketch follows this list.
- The compute shortage is said to be managed via rate limits, pricing, acceptable‑use restrictions, and possibly quality trade‑offs, rather than by closing sign‑ups.
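A small break-even sketch for the pricing split mentioned above; all dollar figures are hypothetical placeholders, not actual Claude pricing.

```python
# Break-even between flat-subscription and per-token pricing.
# All prices below are hypothetical placeholders.
FLAT_USD_PER_MONTH = 100.0
USD_PER_MTOK_IN = 3.0     # assumed $/million input tokens
USD_PER_MTOK_OUT = 15.0   # assumed $/million output tokens

def metered_cost(mtok_in: float, mtok_out: float) -> float:
    """Monthly cost under per-token pricing (token counts in millions)."""
    return mtok_in * USD_PER_MTOK_IN + mtok_out * USD_PER_MTOK_OUT

# Coding-agent usage skews heavily toward input (context) tokens.
for mtok_in, mtok_out in [(5, 0.5), (20, 2), (50, 5)]:
    cost = metered_cost(mtok_in, mtok_out)
    winner = "flat" if cost > FLAT_USD_PER_MONTH else "per-token"
    print(f"{mtok_in:>3.0f}M in / {mtok_out:>3.1f}M out -> "
          f"${cost:>7.2f} metered ({winner} wins)")
```

The shape of the result matches the funnel logic: light users are cheaper to serve metered, while heavy agentic users make flat plans attractive to themselves and expensive for the provider, which in turn motivates the rate-limit and usage-policy levers in the final bullet above.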