Embracing the parallel coding agent lifestyle
Planning, Specs, and Review as the Real Bottleneck
- Several commenters echo that the hard part is planning and specification: defining what you want, the constraints, the order of steps, and how to verify the result.
- Once the work is well-specified, reviewing either human- or AI-written code is much less cognitively taxing.
- However, reviewing AI-generated code still “feels” like reviewing a brand-new coworker's code every time: you can't rely on known habits, so reviews stay expensive.
- Some maintain conventions/spec files in-context and rely heavily on tests and linting to keep AI output within acceptable patterns.
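
As a concrete illustration of that tests-and-linting gate, here is a minimal sketch, assuming a Python repo checked with ruff and pytest; the check_agent_branch helper and the tool choices are illustrative assumptions, not something any commenter specified.

```python
import subprocess
import sys

# Hypothetical review gate: run lint and tests against an agent's branch
# before a human ever looks at the diff. Tool choices (ruff, pytest) are
# illustrative, not prescribed by the discussion.
CHECKS = [
    ["ruff", "check", "."],  # enforce the conventions file mechanically
    ["pytest", "-q"],        # behavioral safety net
]

def check_agent_branch(branch: str) -> bool:
    """Check out an agent's branch and run every gate command in order."""
    subprocess.run(["git", "checkout", branch], check=True)
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"{branch}: failed at {' '.join(cmd)}")
            return False
    print(f"{branch}: all gates passed, queue for human review")
    return True

if __name__ == "__main__":
    check_agent_branch(sys.argv[1])
```

The point of running the gate before review is that an agent's diff only reaches a human once it already conforms to the mechanical conventions.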
Parallel Agents: Value vs Burnout
- Many find parallel agents useful for “grunt work” and research: killing warnings, identifying impacted files, UI automation, or exploring designs they may discard.
- Async/“background” agents are praised: fire and forget, review later, like delegating to quiet teammates (see the dispatch sketch after this list).
- But several are unconvinced that a sustainable, burnout-free parallel workflow is possible while every change still needs human supervision.
- Fast feedback loops can create an illusion of progress and push people into hectic round-robin reviewing. Some explicitly limit LLM use to low-energy hours or move agents out of the IDE to reduce distraction.
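
To make the fire-and-forget pattern concrete, here is a minimal sketch of dispatching several background tasks and collecting results in one later review pass; run_agent and the task list are hypothetical stand-ins, not any specific tool's API.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical task list; each entry would really be a prompt/spec handed
# to whatever agent CLI or API you use.
TASKS = [
    "eliminate compiler warnings in src/",
    "list files impacted by the libfoo 2.x upgrade",
    "prototype an alternative cache design",
]

def run_agent(task: str) -> str:
    """Stand-in for a real agent invocation; swap in your tool's call."""
    time.sleep(1)  # the agent churns away in the background
    return f"[done] {task} -> diff ready for review"

# Fire everything off, go do focused work, then review in one batch.
with ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
    futures = [pool.submit(run_agent, t) for t in TASKS]
    for fut in as_completed(futures):
        print(fut.result())
```

Batching the review pass, rather than reacting to each completion, is one way commenters avoid the hectic round-robin described above.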
Tooling, Containers, and Git Workflows
- There’s substantial discussion of infrastructure: worktrees vs. shallow clones per container, the security implications of sharing .git, and merge-hell risk; a bare-bones worktree-per-agent sketch follows this list.
- Multiple orchestration tools are mentioned (Rover, Sculptor, Conductor, Crystal, Codex Cloud, Copilot CLI, toolkami), often wrapping:
- per-agent containers
- isolated checkouts/worktrees
- centralized dashboards for many agents
- Visual and session management is non-trivial: people use iTerm2 tricks, tmux, tiling WMs, Stage Manager, and want better ways to track many agents and environments.
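
The per-agent isolation these tools wrap can be sketched with plain git worktrees; this is a minimal sketch assuming one branch and directory per agent, with names invented for illustration rather than taken from any particular orchestrator.

```python
import subprocess
from pathlib import Path

# Give each agent an isolated checkout via git worktree so parallel edits
# never collide in one working directory. Branch and directory names are
# illustrative assumptions, not any tool's convention.
def add_worktree(repo: Path, agent: str) -> Path:
    wt = repo.parent / f"{repo.name}-{agent}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add",
         "-b", f"agents/{agent}", str(wt)],
        check=True,
    )
    return wt

def remove_worktree(repo: Path, wt: Path) -> None:
    """Tear down once the agent's branch has been reviewed and merged."""
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "remove", str(wt)],
        check=True,
    )

if __name__ == "__main__":
    repo = Path.cwd()  # assumes you run this from the main checkout
    for agent in ("alpha", "beta", "gamma"):
        print("isolated checkout at", add_worktree(repo, agent))
```

Because worktrees share one underlying object store and refs, agents in separate containers would still touch common repository state, which is the .git-sharing trade-off the thread weighs against per-container shallow clones.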
Latency, Anthropomorphizing, and “Agents”
- Some argue we only talk about “agents” because coding LLMs are still slow; if responses were near-instant, we’d treat them more like tools than coworkers.
- Others note that even with fast models, human review, test execution, and the latency of web requests will remain the primary bottlenecks.
Quality, Learning, and the “Super‑Manager” Debate
- One camp sees AI turning devs into hybrid manager–ICs: coordinating multiple agents, comparing diffs from several candidate solutions (see the sketch after this list), and focusing on review and feedback.
- Critics argue this is overblown and risks short‑circuiting the learning that comes from personally doing the edit–compile–test cycle and building deep mental models of systems.
- There’s broad agreement that without strong review discipline, parallel agents make it easier to ship “slop” faster.
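
For the compare-several-solutions workflow, a possible starting point is a diffstat summary of each candidate branch against main; the branch names below are hypothetical.

```python
import subprocess

# Hypothetical: three agents each attempted the same task on its own branch.
BRANCHES = ["agents/alpha", "agents/beta", "agents/gamma"]

# Summarize each candidate against main with a diffstat, so the reviewer
# can compare the shape of the solutions before reading any diff in full.
for branch in BRANCHES:
    stat = subprocess.run(
        ["git", "diff", "--stat", f"main...{branch}"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(f"== {branch} ==\n{stat}")
```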
Broader Skepticism and Philosophy
- Some compare current LLM use to alchemy or early nuclear research—powerful but poorly understood, with “cookbooks about cookbooks” while the field searches for stable abstractions.
- Others claim the industry is deep in sunk-cost territory: modest productivity gains don’t justify the immense capital spent.
- Debate continues over whether LLMs are “stochastic parrots” or capable of genuinely new reasoning; prompt cargo-culting and even “bullying” models in system prompts are viewed with a mix of amusement and concern.