AI coding

Perceived vs actual productivity

  • Several comments echo the article’s claim that AI “feels” like a boost but can actually slow developers, referencing the METR study: devs estimated they were ~20% faster but were actually ~19% slower, the gap going to waiting, prompting, and extra review.
  • Others strongly disagree, especially senior devs with >20–30 years’ experience who report order‑of‑magnitude speedups for common tasks, while admitting they skim intermediate AI output and only deeply review final versions.
  • A recurring theme: most of the time saved is not in “typing code” but in library/API discovery, boilerplate, examples, and quick prototyping.

Where AI coding is working well

  • Widely cited productive uses:
    • “Autocomplete on steroids” in editors (Cursor, Copilot, etc.).
    • Researching unfamiliar concepts, libraries, SDKs, and generating minimal working examples.
    • Boilerplate CRUD, test scaffolding, logging, simple scripts, config files, dashboards, small self‑contained components.
    • Debugging help and log analysis, especially for noisy traces.
    • Brainstorming architectures, refinements, and alternative designs.
  • Many treat AI as a tireless mid‑level or junior dev: good at repetitive work, examples, and refactors under close supervision.

Limitations, risks, and side‑effects

  • Vibe‑coded codebases: several report losing understanding of their own projects, struggling to answer colleagues’ questions, or watching heavy technical debt accumulate from AI‑driven teammates.
  • Non‑determinism and weak specs: English is ambiguous; long prompts drift; agents can “rewrite everything” including tests and specs, causing spec‑drift over time.
  • Poor performance on niche domain logic, novel tasks, large codebases, or long iterative debugging without tight steering.
  • Concerns about diminished critical thinking, “slot‑machine” prompting behavior, and exhaustion from spending all day on hard problems while AI does the “easy” parts.

Impact on learning, juniors, and careers

  • Strong worry that AI will eat the “boring” work that used to train juniors, shrinking the pipeline of future seniors, much as trades that failed to train apprentices later faced national‑scale skill shortages.
  • Counter‑view: this is similar to past shifts (e.g., higher‑level languages), and skills will just move up a layer (specs, constraints, reasoning about effects).
  • Non‑professionals and late‑career devs describe AI as transformative: it lets them build personal tools or stay productive despite reduced focus, in ways they otherwise couldn’t.

Metaphors and models: compiler, assistant, or something else?

  • The post’s “AI as English compiler” analogy is heavily contested:
    • Critics say compilers are deterministic implementations of formal specs; LLMs are probabilistic code synthesizers plus search over code, guided by tests, types, and CI.
    • Many prefer “junior dev” or “probabilistic synthesizer” metaphors: useful within constraints, dangerous if treated as a magical natural‑language compiler.
  • Several argue that the real value is forcing clearer specifications; English (or structured natural language) may evolve into a higher‑level spec language, but it still needs rigor and constraints.