We're not innovating, we're just forgetting slower
Reliability, Complexity, and Repairability
- Several commenters want a rigorous way to measure product reliability over time, rather than relying on nostalgia.
- Anecdotes conflict: modern car ignitions and consoles are seen as vastly more reliable than older ones; phones, routers, web UIs, and “smart” devices feel flakier, slower, and harder to debug or repair.
- Older hardware (VHS players, 8‑bit machines) was often repairable with manuals and tools; today’s SoCs and sealed devices are cheap and disposable. Some see this as planned obsolescence; others as a rational outcome of lower hardware cost and higher complexity.
Abstractions, Specialization, and “Real Engineers”
- One camp argues that software quality is declining: endless abstraction layers, misused tools (CMake, Docker, npm), 6 GB containers for trivial tasks, bloated HTML emails, and so on.
- Opponents say “nobody knows everything” has always been true: civil engineers don’t smelt steel, mechanics don’t refine ore. Division of labor and specialization underlie modern prosperity.
- A middle view: depth across a few adjacent layers (e.g., OS + DB, or frontend + browser internals) makes engineers much better, but demanding that everyone know op-amps, assembly, and Kubernetes is unrealistic.
Dependencies, Overengineering, and Accidental Complexity
- Software stacks are compared to ultra-processed food: an explosion of tiny packages and services that are costly, fragile, and often unnecessary.
- Some call cloud-native stacks (containers, Kubernetes, serverless) “accidental complexity”; others note these solve real deployment and scalability problems when used appropriately.
- Physical-world analogies split the thread: some say everything from bridges to pencils already depends on vast supply chains; others reply that hardware has stable standards (screws, voltages) while software keeps reinventing incompatible layers.
AI, LLMs, and Skill Erosion
- The article’s “stochastic parrot” framing of LLMs is challenged: commenters explain how next-token training can still yield genuine capabilities (e.g., arithmetic, code synthesis).
- Concern: over-reliance on LLMs and high-level tools may atrophy understanding; people may accept plausible but wrong outputs and lose the habit of deep reading and verification.
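The verification habit the last bullet worries about losing can be made concrete. A minimal, hypothetical sketch (the names `safe_eval` and `accept_answer` are illustrative, not from the thread): instead of trusting a model's plausible-looking arithmetic answer, check it with an independent evaluator before accepting it.

```python
# Hedged sketch: verify a model's claimed arithmetic answer with an
# independent check instead of trusting plausible output.
import ast
import operator

# Map AST operator nodes to the arithmetic they denote.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.FloorDiv: operator.floordiv}

def safe_eval(expr: str) -> int:
    """Evaluate a small integer arithmetic expression without eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError(f"unsupported expression node: {type(node).__name__}")
    return walk(ast.parse(expr, mode="eval"))

def accept_answer(expr: str, claimed: int) -> bool:
    """Accept the claimed answer only if the independent check agrees."""
    return safe_eval(expr) == claimed

print(accept_answer("17 * 23", 391))  # True: 391 is correct
print(accept_answer("17 * 23", 381))  # False: plausible but wrong
```

The point is not the checker itself but the workflow: a generated answer is a hypothesis to be verified, not a result to be pasted.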
Opacity of Modern Systems and “Forgetting”
- Criticism of UIs and systems that hide diagnostics (“something went wrong”), producing cliff-edge failures that are hard to troubleshoot.
- Some see a broader pattern: we repeatedly rediscover old ideas (time-sharing vs serverless, distributed systems vs “edge”) without clear collective memory of prior art, which they argue is closer to “forgetting” than genuine innovation.
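The "something went wrong" criticism above can be sketched in a few lines. This is an illustrative example, not code from the thread: the function names and the config scenario are assumptions. The opaque handler produces exactly the cliff-edge failure commenters describe; the diagnosable one preserves the cause.

```python
# Hedged sketch: an opaque "something went wrong" handler vs. one that
# surfaces diagnostics. Names and scenario are illustrative.
import json

def load_config_opaque(raw: str):
    try:
        return json.loads(raw)
    except Exception:
        # Cliff-edge failure: the user learns nothing about the cause.
        raise SystemExit("Something went wrong")

def load_config_diagnosable(raw: str):
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        # Keep the position info and chain the original exception,
        # so the failure can actually be troubleshot.
        raise ValueError(
            f"config is not valid JSON at line {e.lineno}, column {e.colno}"
        ) from e

try:
    load_config_diagnosable('{"port": 8080,}')  # trailing comma
except ValueError as e:
    print(e)  # points at the offending line and column
```

Same failure, same cost to handle; the only difference is whether the diagnostic information is thrown away or passed along.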