Why agents are bad pair programmers
Flow, Distraction & “Deep Work”
- Many commenters say inline AI autocomplete and aggressive agents destroy focus: constant suggestions interrupt mental flow and crowd out the solution they were about to type.
- Others report the opposite: with subtle or on-demand setups, AI enhances deep work—especially when configured not to act unless asked.
- Several people maintain two environments (AI-enabled and AI-free) and switch depending on task. Some disable autocomplete entirely but keep “agent” tools for boilerplate or one-off scripts.
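A split setup like the one above can be sketched as an editor profile that silences inline completions while leaving chat/agent tooling enabled. This sketch uses VS Code settings with Copilot as the assumed inline provider; the thread doesn't name specific tools, so treat the choice of editor and extension as an assumption:

```jsonc
// settings.json for the "AI-free typing" profile (VS Code / Copilot assumed).
{
  // Suppress all inline ghost-text suggestions while typing.
  "editor.inlineSuggest.enabled": false,
  // Copilot-specific switch: turn off completions for every language.
  "github.copilot.enable": { "*": false }
}
```

Keeping this in a separate profile (rather than toggling settings by hand) makes switching between the AI-enabled and AI-free environments a one-click operation.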
Autocomplete vs Agents
- Strong split:
  - Some hate AI autocomplete, especially in strongly typed languages where IDE suggestions are already precise; they prefer agents that operate in larger, explicit chunks and can run tests.
  - Others love autocomplete (especially in verbose languages like Go) for loops, logging, and boilerplate, as long as suggestions are short and fast to scan.
- Editor UX matters: subtle modes, ask/plan modes, and “watch”/terminal flows that don’t touch files unless asked are praised; tools that apply big diffs or overwrite manual tweaks mid-stream draw heavy criticism.
Code Quality, Trust & Maintainability
- Many see agents as “idiot savant” coders: fast and decent at CRUD, scaffolding, SQL/queries, but poor at architecture, decisions, and edge cases.
- Review burden is high: large, overconfident diffs; excessive comments; occasional wild changes (e.g., hundreds of imports, collapsing OO hierarchies into if/else chains).
- Several conclude AI-generated code is fine when they don’t care about long-term maintainability (one-off tools, leaf functions), but not for core code others must live with.
Prompting, Planning & Control
- A recurring theme: success is extremely prompt- and workflow-dependent.
- Suggested patterns:
  - Use “plan first, then apply” workflows; iterate on a design doc or TODO before any edits.
  - Constrain scope (small tasks, clear files, style rules) and keep project-specific prompt documents the agent always reads.
  - Turn-taking flows (commit per change, easy undo) reduce thrash.
- Some complain that more planning detail can confuse current models; others show elaborate prompt regimes working well for them.
Use Cases, Limits & Meta-Pairing
- Common positive uses: reference lookups, scaffolding, tests, debugging probes, documentation, English/spec writing.
- Negative patterns: agents that don’t ask clarifying questions, rarely push back, or change behavior unpredictably from run to run.
- Several note that the article’s critique also mirrors why human pair programming often fails: mismatched pacing, one side dominating, and not enough explicit back-and-forth.