AI coding tools can reduce productivity

Perceived usefulness across domains

  • Many report huge gains (often 10–20x subjectively) for frontend, CRUD, boilerplate, new libraries/frameworks, and greenfield work.
  • Some see the opposite: LLMs are more helpful for low-level C/Rust/systems-style tasks (data structures, parsers) than for web UIs.
  • Common pattern: LLMs shine in domains where the developer is less expert and tasks are “standardized”; far less so for mature, novel, or niche codebases.
  • LLMs are often used as a better Stack Overflow / doc search, or to avoid wrestling with bad documentation.

Experiences with tools and workflows

  • Many prefer “smart autocomplete” (e.g., Copilot-style inline suggestions) over agentic tools; they trust snippet-by-snippet use and still review everything.
  • Agentic workflows (Cursor, Claude Code, etc.) can feel like a slot machine: big wins sometimes, but also long unproductive loops and “one more prompt” traps.
  • Some describe elaborate processes: planning with the model, turning work into “cards,” using multiple models in tandem, or maintaining an AGENTS.md with custom instructions.
  • Others report outright failures (e.g., migrations, Pub/Sub integrations) where doing it “by hand” later was faster and more reliable.
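
  One commenter-style AGENTS.md sketch (contents hypothetical, not from any specific poster) shows the kind of custom instructions people maintain for agentic tools:

```
# AGENTS.md — project instructions for coding agents (illustrative)

## Build and test
- Run `make test` before proposing any change.

## Conventions
- Prefer small, reviewable diffs; no drive-by refactors.
- Match the existing error-handling style; do not add new dependencies.

## Known pitfalls
- The migrations directory is generated; never edit it by hand.
```

  Files like this try to front-load the review feedback the model would otherwise receive one "one more prompt" at a time.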

Code quality and technical debt

  • AI-produced code is often 30–50% longer, more repetitive, and less abstracted; several see this as pure technical debt rather than better structure.
  • Review capacity rarely scales with the increased output; many doubt that the extra code is being adequately reviewed.
  • Some argue all code is technical debt and that more code almost always means more bugs; others counter that overly terse code is also costly.
  • Frontend specialists complain that AI-generated UI code can be non-functional, ugly, and hard to maintain, especially when used by backend devs under management pressure to “use AI.”
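
  The "30–50% longer, more repetitive" complaint can be made concrete with a toy contrast (field names hypothetical): two functionally identical validators, one in the duplicated style often attributed to generated code, one abstracted.

```python
# Repetitive style: each field check is spelled out by hand.
def validate_repetitive(form):
    errors = []
    if not form.get("name"):
        errors.append("name is required")
    if not form.get("email"):
        errors.append("email is required")
    if not form.get("phone"):
        errors.append("phone is required")
    return errors

# Abstracted style: one rule, one list of fields.
REQUIRED_FIELDS = ["name", "email", "phone"]

def validate_abstracted(form):
    return [f"{field} is required"
            for field in REQUIRED_FIELDS if not form.get(field)]
```

  Both return the same errors, but only the second version stays the same size as fields are added; the first grows by three lines per field, which is the review-burden argument in miniature.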

Study methodology and interpretation

  • The referenced study: 16 experienced OSS maintainers, 246 issues, tasks randomly assigned with/without AI; developers estimated times up front.
  • Result: AI-allowed tasks took ~19% longer than estimated; no-AI tasks finished ~20% faster than estimated; participants nonetheless felt faster with AI.
  • Some argue 16 people is too few; others note that 246 tasks is a statistically meaningful sample but may not generalize beyond this population and kind of work.
  • Critics question task selection (real OSS issues vs everyday corporate tickets), possible noise from different tasks per condition, and whether this captures AI’s biggest sweet spots (e.g., standardized internal tooling).
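
  A rough sanity check of the sample-size debate, assuming an even 123/123 split and a spread (standard deviation of log task duration) of 1.0 — both assumed numbers, not taken from the study:

```python
import math

# How large is a ~19% effect relative to sampling noise for 246 tasks
# split evenly between conditions? The per-task spread is an ASSUMED
# value; real OSS issues vary wildly in size.

n_per_condition = 123        # assumed even split of 246 tasks
sd_log_time = 1.0            # assumed sd of log task duration
effect = math.log(1.19)      # a 19% multiplicative slowdown, in log space

# Standard error of the difference between the two condition means.
se = sd_log_time * math.sqrt(2 / n_per_condition)
z = effect / se              # ~1.4 under these assumptions
```

  Under this pessimistic spread the effect is only ~1.4 standard errors; with a tighter spread of 0.5 it would be ~2.7. Both the "statistically meaningful" and "too noisy to trust" readings in the thread are consistent with the same data, depending on how variable one believes the tasks were.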

Measuring developer productivity

  • Long, unresolved debate: no consensus metric for individual programmer productivity; salary, lines of code, tickets closed, or business outcomes are all flawed and gameable.
  • Analogies to doctors/teachers: outcome metrics exist but are heavily distorted by incentives (Goodhart’s law).
  • Several suggest controlled experiments (same tasks, with/without AI) as more meaningful than self-perception; others point out that most real-world work isn’t easily standardized.

Learning, skills, and human factors

  • Concern: AI reduces time spent researching and thinking, encouraging shallow understanding and hindering long-term skill development, especially for frontend work and for juniors.
  • Some use AI explicitly as a “challenger” or rubber duck: to expose gaps in understanding, not to replace their own design and reasoning.
  • Others value the “meditative” aspect of manual coding and deliberately avoid automating it away.

Hype, trajectory, and adoption

  • Some see current limitations as a “trough of disillusionment” and expect rapid monthly improvements to make today’s productivity questions moot.
  • Skeptics counter that many past “revolutionary” technologies plateaued; they want concrete current benefits, not promises.
  • Several note that non-expert stakeholders are easily impressed by AI output but can’t judge maintainability, risking a flood of mediocre code that future humans (and AIs) must untangle.