Writing Code Was Never the Bottleneck

Was Code Ever the Real Bottleneck?

  • Many agree with the article: in professional software development, the real bottlenecks are specifications, requirements, domain understanding, coordination, and decision-making, not typing code.
  • Code review, debugging, testing, and cross-team communication consume most of the time, especially in large organizations with heavy meeting, ticket, and process overhead.
  • Some push back: for solo devs, small startups, and side projects, writing code often is the constraint; LLMs unlock many ideas that previously died for lack of time.

Where LLMs Clearly Help

  • Fast generation of boilerplate, CRUD, glue code, small tools, one-off scripts, and UI/CSS; a big win for “unimportant but necessary” work (a minimal sketch of such a script follows this list).
  • Non-coders (or light coders) can now build small but real apps (e.g., domain-specific tools) that would have been out of reach.
  • Strong developers report major gains when using LLMs as:
    • Advanced autocomplete.
    • Code search/summarization and “active rubber duck” for unfamiliar code.
    • Test generator and integration-test assistant.
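
  As a concrete illustration of the “one-off scripts” item above, here is a minimal sketch in the spirit of what commenters describe asking an LLM to produce; the task (collecting TODO/FIXME comments into JSON) and every name in it are assumptions made for this example, not details from the article or the discussion.

```python
# Illustrative only: the kind of "unimportant but necessary" one-off script
# commenters say LLMs now draft in seconds. Paths and conventions are
# assumptions for the sketch.
import json
import pathlib
import re

TODO_PATTERN = re.compile(r"#\s*(TODO|FIXME)[:\s](.*)", re.IGNORECASE)

def collect_todos(root: str) -> list[dict]:
    """Walk a source tree and collect TODO/FIXME comments with their locations."""
    findings = []
    for path in sorted(pathlib.Path(root).rglob("*.py")):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), start=1):
            match = TODO_PATTERN.search(line)
            if match:
                findings.append({"file": str(path), "line": lineno, "note": match.group(2).strip()})
    return findings

if __name__ == "__main__":
    # Print as JSON so the result can be pasted into a ticket or a report.
    print(json.dumps(collect_todos("src"), indent=2))
```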

Where LLMs Make Things Worse

  • Juniors using LLMs produce far more code with far less understanding, leading to:
    • Subtle, non-obvious bugs in code that “looks polished”.
    • Larger, more complex solutions than needed.
    • PRs that shift direction completely between review rounds.
  • Senior engineers report an “effort inversion”: reviewing AI-boosted junior PRs takes longer than writing the feature themselves would.
  • Testing and review quality often collapse when authors don’t understand the implementation; they can’t design good tests or reason about edge cases.

Code Review, Reading, and Maintainability

  • Reading and understanding code already dominated engineering time; LLMs increase code volume and therefore review load.
  • Existing review practices (quick sanity checks) don’t scale to high volumes of AI-generated code that its authors only partly understand.
  • Suggested mitigations: require design/spec docs, enforce test quality, demand that authors explain their changes, and use LLMs to assist review rather than replace it (a sketch of one such automated check follows).
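
  One of the mitigations above, enforcing test quality, can be partly automated in CI. Below is a minimal sketch, assuming a src/ and tests/ layout and an origin/main base branch (all assumptions for this example, not from the discussion), of a check that fails when production code changes arrive without any test changes.

```python
# Illustrative only: a crude CI gate that fails a pull request which modifies
# source files without touching any tests. Branch name and directory layout
# are assumptions for this sketch.
import subprocess
import sys

def changed_files(base: str = "origin/main") -> list[str]:
    """List files changed relative to the base branch, as a CI job would see them."""
    result = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line]

if __name__ == "__main__":
    files = changed_files()
    touched_src = any(f.startswith("src/") for f in files)
    touched_tests = any(f.startswith("tests/") for f in files)
    if touched_src and not touched_tests:
        print("Source files changed but no tests were added or updated.")
        sys.exit(1)
    print("OK: tests accompany the source changes.")
```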

Business Incentives and Long-Term Effects

  • Many expect a flood of “good enough” but brittle software: cheap to create, expensive to maintain.
  • High-quality, human-crafted code will persist but be rarer and more expensive.
  • Key open question: can LLMs eventually also reduce the real bottlenecks (spec quality, architectural decisions, and shared understanding), or will they mainly accelerate the production of technical debt?