Claude Code: Best practices for agentic coding

Tooling & workflows

  • Many liked the idea of multiple repo checkouts (or git worktrees) so different agents can work in parallel without blocking each other.
  • Typical workflow: multiple terminal sessions, each with its own task, plus project-level docs like CLAUDE.md and AI-generated markdown design notes.
  • Some recommend orchestrators (e.g. “Claude Squad”) to manage worktrees; others prefer lighter tools (aider, Plandex, Goose, Roo/Cline) that let you choose models and control context more explicitly.
  • Several people treat agents as “brilliant but overeager interns”: give them constrained tasks and let git reveal and revert bad changes.
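The parallel-checkout workflow above is usually built on `git worktree`, which gives each agent its own working directory and branch while sharing one object store. A minimal sketch (run in a scratch repo; paths and branch names are illustrative):

```shell
set -e
# Demo in a throwaway repo so the commands are self-contained.
tmp=$(mktemp -d)
git init -q "$tmp/myrepo"
cd "$tmp/myrepo"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# One worktree per agent, each on its own branch, so agents never
# contend for the same working tree or index.
git worktree add -q ../agent-a -b agent-a
git worktree add -q ../agent-b -b agent-b

# Shows the main checkout plus the two agent worktrees.
git worktree list
```

In practice you would start one terminal session per worktree (e.g. `cd ../agent-a && claude`), let each agent commit on its own branch, and use `git worktree remove` plus an ordinary merge or revert once you have reviewed the results.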

Pricing, cost control & billing

  • Strong frustration that Claude Code is billed separately from the Claude Pro/Max web/desktop plans; some felt misled and cut back their usage, while others argued API-style pricing is necessary given the volume and reliability involved.
  • Reported costs range widely: ~$0.50–0.75 per task, $35–40/day, or ~$200 per feature/PR; some teams spend $100–500/day on LLMs, while others find that unimaginable.
  • Suggested cost controls: force narrow file reads, avoid broad searches and huge outputs, don’t edit files outside the agent mid-session (external edits invalidate the prompt cache), limit session length, and store context in markdown instead of re-explaining it each time.
  • A number of users say the mental overhead of managing cache and context to save tokens points to poor UX; others counter that developer time dwarfs token cost, so micro-optimizing usage is rarely rational for a business.
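The last cost tip above, keeping durable context in markdown, can be as simple as a project-level notes file the agent reads at the start of each session instead of having the repo re-explained to it. A hypothetical sketch (file name is the conventional CLAUDE.md; the contents and referenced paths are illustrative):

```markdown
<!-- CLAUDE.md — project context loaded at session start -->
## Project conventions
- Python 3.12; ruff for lint, pytest for tests
- Run `make test` before proposing a diff

## Architecture notes
- API handlers live under src/api/, one module per resource

## Current task context (ask the agent to update this before /clear)
- Auth refactor in progress; open questions captured in docs/auth-notes.md
```

Because the file is re-read rather than re-generated, the tokens spent explaining the project are paid once per session instead of once per question.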

Productivity, quality & role of developers

  • Enthusiasts claim LLM tools can match or exceed a team’s output on boilerplate-heavy work (UI, migrations, scrapers, MVPs), and may shrink demand for junior developers.
  • Skeptics argue LLMs don’t replicate Staff+ engineers, are still unreliable on “basic” tasks, and risk massive volumes of low-quality code.
  • Several liken fully agentic coding to outsourcing to a large vendor: you still have to specify requirements precisely and review the output carefully. The biggest gains come when experts use models interactively in a “cybernetic” loop, not as fully autonomous programmers.

UX, context management & “thinking” modes

  • Claude Code’s /clear, cache behavior, context loss, and lack of easy branching are pain points; workarounds include saving summaries to files for later reload.
  • The “think / think hard / ultrathink / megathink” hidden keywords that change thinking-token budgets were widely noted as amusing but also criticized as an odd, opaque interface; some prefer explicit knobs like /think 32k.
  • Comparisons: Copilot and Cursor are praised for seamless, context-following IDE integration; Aider for precise, file-explicit control; Claude Code for “just working” and deep repo understanding, albeit at higher and less predictable cost.

Ecosystem & competition

  • Many mention Gemini 2.5 Pro as significantly cheaper than the Claude 3.7 API and often “good enough” or better for coding; others still strongly prefer Claude’s behavior.
  • There’s concern about every model vendor building its own IDE-level tool, duplicating effort and fragmenting the ecosystem.