The Unsustainability of Moore's Law
Clarifying Moore’s Law and Scaling
- Multiple comments argue the article conflates Moore’s Law (growth in transistor count per chip) with transistor density and with Dennard scaling (the observation that shrinking transistors kept power density roughly constant, which is what allowed clock speeds to keep rising).
- Some stress that Moore’s Law was always as much economic as technical, with Rock’s “second law” (the cost of a leading-edge fab roughly doubling every four years) now a major constraint; a quick projection of both doubling curves is sketched after this list.
- Others note we’ve been hearing “Moore’s Law is over” for decades, yet transistor counts and feature sets continue to grow, now via chiplets, die stacking, and specialized units rather than higher clock speeds.
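
To put the two doubling rates side by side, here is a minimal Python sketch; the 2020 starting figures for transistor count and fab cost are rough illustrative assumptions, not numbers from the thread.

```python
# Illustrative projection of Moore's Law (transistor count doubling ~every 2 years)
# versus Rock's second law (leading-edge fab cost doubling ~every 4 years).
# Starting values are assumed ballparks for illustration only.

def doubling(start, period_years, years):
    """Value after `years` for a quantity that doubles every `period_years`."""
    return start * 2 ** (years / period_years)

if __name__ == "__main__":
    transistors_2020 = 50e9   # assumed: ~50 billion transistors on a large 2020-era die
    fab_cost_2020 = 15e9      # assumed: ~$15B for a leading-edge fab circa 2020

    for year in (2020, 2024, 2028, 2032):
        t = doubling(transistors_2020, 2, year - 2020)
        c = doubling(fab_cost_2020, 4, year - 2020)
        print(f"{year}: ~{t / 1e9:.0f}B transistors, fab ~${c / 1e9:.0f}B")
```

Even at the slower four-year cadence, the cost curve compounds quickly, which is the economic constraint commenters keep returning to.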
Software Performance vs Hardware Advances
- Wirth’s Law (“software is getting slower faster than hardware is getting faster”) is debated.
- One side cites bloat, higher resource use, and the claim that SSDs alone account for much of the perceived speed gains.
- The other side points to enormous UX improvements (near-instant boot, seamless multitasking, phones that outclass early PCs) and argues that average real-world performance is far better.
- Some note that “snappy” old systems benefited from tight optimization under severe constraints, suggesting value in modern “Snow Leopard-style” optimization releases.
Optimization and Games
- One view: we’re not bound by Moore’s Law because we simply waste compute; examples like God of War II running on the PS2’s modest hardware show what tight optimization can achieve.
- Counterpoint: AAA console games still employ dedicated optimization teams; more compute also enables richer features, not just wasted cycles.
- Examples like Doom (2016) running well on very old CPUs are praised, but there’s disagreement over whether such “forbidden magic” is common today.
Future Compute: Smartphones, Cloud, and AI
- Several expect the current pattern to persist: smartphones as primary personal devices, with heavy workloads (especially AI) pushed to datacenters.
- Others note persistent needs for larger screens/keyboards, implying continued demand for PCs and laptops, though often as “fat thin clients” (Chromebooks, etc.).
- AI is framed by some as the long-awaited “killer app” that justifies more RAM, NPUs/GPUs, and new process nodes into the 2030s; others counter that many users dislike AI features and see them as unnecessary.
Economics, Fabs, and Industry Concentration
- The rising cost of fabs and tooling (ASML’s EUV lithography machines in particular) is widely seen as the true limiter: fewer companies can afford leading-edge nodes, pushing joint ventures and deep supplier–foundry integration.
- Some worry about systemic risk if a major fab is lost, though others dismiss this as exaggerated.
Physical and Architectural Limits
- Discussion touches on Landauer’s principle, ultimate finite-resource limits, and whether we’ll hit economic limits before physical ones; a back-of-the-envelope Landauer calculation follows this list.
- Techniques mentioned: more parallelism, larger caches (with economic tradeoffs), chip stacking, multi-reticle stitching, and GAA/nanosheet transistors where channels become effectively intrinsic.
- There’s mild speculation about alternative paradigms (neuromorphic, adiabatic/reversible logic, atomic-scale fabrication), but consensus that none are yet viable at scale.
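
To give a sense of scale for the thermodynamic floor mentioned above, here is a back-of-the-envelope Python sketch; the per-operation CMOS switching energy is an assumed ballpark for illustration, not a figure from the discussion.

```python
import math

# Landauer's principle: erasing one bit of information dissipates at least
# k_B * T * ln(2) joules, where k_B is Boltzmann's constant and T the temperature.
K_B = 1.380649e-23          # Boltzmann constant, J/K
T_ROOM = 300.0              # assumed room temperature, K

landauer_limit = K_B * T_ROOM * math.log(2)   # ~2.9e-21 J per bit erased

# Assumed ballpark for an irreversible logic operation in a modern CMOS chip;
# real figures vary widely by node, voltage, and circuit.
cmos_energy_per_op = 1e-15  # ~1 femtojoule, illustrative only

print(f"Landauer limit at {T_ROOM:.0f} K: {landauer_limit:.2e} J/bit")
print(f"Assumed CMOS energy/op:          {cmos_energy_per_op:.2e} J")
print(f"Headroom factor: ~{cmos_energy_per_op / landauer_limit:,.0f}x")
```

The several-orders-of-magnitude gap is consistent with the view above that economic limits are likely to bind well before the Landauer floor does.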
Device Longevity and Obsolescence
- One thread disputes claims that modern GPUs last only 3–7 years; anecdotal evidence from mining rigs and long-lived consumer cards contradicts this.
- Several point out that software/OS de-support (especially on proprietary platforms) often kills devices long before hardware failure; open OSes can extend life substantially.