Nobody knows how the whole system works
Scope of Understanding vs. AI-Generated Code
- Many agree that nobody has ever understood the whole system, but historically each component had at least one human expert; the new concern is components that no one really understands, including the AI that generated them.
- Legacy code and high dev turnover already produce “nobody understands this” situations; AI may accelerate that by normalizing non‑understanding at the very layer you’re paid to own.
Abstractions, Fundamentals, and Education
- Several distinguish healthy abstraction (“you don’t need to know transistor physics to use a CPU ISA”) from ignorance of basics (“you can’t even fry an egg”).
- The key worry isn’t not knowing every layer, but losing the ability or willingness to understand any given layer when needed.
- “Graybeards” report repeated pushback when they try to teach fundamentals (compilers, hardware, low‑level performance), yet see those skills as crucial when abstractions leak.
AI Assistants: Optimism vs. Skepticism
- Optimistic view:
- AI lets engineers work at higher levels; hierarchies and delegation are how all complex human systems function.
- LLMs can quickly explore and document codebases, help with dependency hell, and summarize large systems faster than a new hire could.
- Some workflows record prompts, outputs, and keep specs/Git history updated, using AI as a documentation and refactoring engine.
- Skeptical view:
- AI code lacks intentionality; it “happens to work” rather than being designed for a clear purpose, making reasoning, maintenance, and responsibility harder.
- LLM outputs are non‑deterministic and opaque, unlike compilers and CPUs, which are highly specified, tested, and stable.
- Trust is low: people report subtle bugs, poor test design, and verbose, hard‑to‑review code; reviewing AI output can cost more than writing it.
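The logging-heavy workflows mentioned above (record prompts, record outputs, keep a "what/why" trail) can be sketched minimally. This is a hypothetical illustration, not a real tool: `record_ai_change`, the `ai_changes.md` log file, and the example prompt/output strings are all invented for the sketch.

```python
import datetime
import pathlib

LOG = pathlib.Path("ai_changes.md")  # hypothetical "what/why" markdown log
FENCE = "`" * 3  # markdown code fence for embedding prompt/output verbatim

def record_ai_change(prompt: str, output: str, why: str) -> None:
    """Append one AI-assisted change (prompt, output, rationale) to the log."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    entry = (
        f"## {stamp}\n\n"
        f"**Why:** {why}\n\n"
        f"**Prompt:**\n\n{FENCE}\n{prompt}\n{FENCE}\n\n"
        f"**Output:**\n\n{FENCE}\n{output}\n{FENCE}\n\n"
    )
    with LOG.open("a", encoding="utf-8") as f:
        f.write(entry)

# Example: log a refactoring request alongside its rationale.
record_ai_change(
    prompt="Refactor parse_config to return a dataclass",
    output="def parse_config(path): ...",
    why="Reduce dict-key typo bugs flagged in review",
)
```

The point of such a log is that it is append-only and sits in Git history, so the "why" survives even when the generated code is later rewritten.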
Responsibility, Interfaces, and Systemic Risk
- Several emphasize a moral and professional duty: you must understand the part of the system you’re responsible for (especially business logic), even if you treat lower layers as black boxes.
- Stable, well‑documented interfaces (CPU ISA, HP‑12C‑like tools) are contrasted with churning, poorly governed ecosystems (Node.js dependency trees, changing libraries); the “nobody understands the system” problem becomes acute when interfaces themselves are unstable.
- Broader analogies (food production, pencils, microprocessors, tax codes, telephony) highlight that modern civilization depends on extreme specialization and partially understood systems; disagreement remains over whether AI will consolidate knowledge (as an explainer) or deepen dependence on opaque corporate black boxes.
Proposed Directions and Mitigations
- Suggestions include:
- Using LLMs with explicit practices: persistent histories, “what/why” markdown logs, auto‑updated specs.
- Moving from “code generation” toward DSL‑first systems and controlled business languages that are simpler to reason about and constrain AI slop.
- Treating prompt engineering and system design as the enduring human craft, with AI as a tool rather than an oracle.
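The "DSL-first" suggestion above can be illustrated with a toy controlled business language: each rule is one line of the form `<field> <op> <value> -> <action>`, simple enough that both humans and tools can check every rule. The syntax, field names, and actions here are invented for the sketch.

```python
import operator

# Comparison operators the toy DSL permits.
OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq}

def parse_rule(line: str):
    """Parse '<field> <op> <value> -> <action>' into (predicate, action)."""
    cond, action = line.split("->")
    field, op, value = cond.split()
    def predicate(record: dict) -> bool:
        return OPS[op](record[field], float(value))
    return predicate, action.strip()

def evaluate(rules, record: dict) -> list[str]:
    """Return the actions whose conditions hold for this record."""
    return [action for predicate, action in rules if predicate(record)]

rules = [parse_rule(line) for line in [
    "order_total > 100 -> apply_free_shipping",
    "days_overdue > 30 -> escalate_invoice",
]]
print(evaluate(rules, {"order_total": 150.0, "days_overdue": 5}))
# prints ['apply_free_shipping']
```

Because the grammar is this small, every rule an AI emits can be parsed, validated, and reviewed line by line, which is the constraint-on-slop argument in a nutshell.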