How I use Claude Code: Separation of planning and execution

Planning vs. “just code it”

  • Many commenters already use a similar “research → plan → execute” loop and see it as standard Claude/Cursor practice, not radical.
  • Others argue that for experienced developers, extensive planning, prompting, and orchestration can exceed the effort of hand-writing the code, especially for small or medium tasks.
  • Several people note a split in temperament: some find reviewing plans easier than writing code; others find review more mentally draining and prefer to think directly in code.

Artifacts: tickets, specs, and plan docs

  • Variants abound: markdown tickets, design docs with embedded TODOs, multi-layer specs (requirements → architecture → implementation plan), and “project concept lists.”
  • Storing research.md/plan.md (or GitHub issues) in version control is praised as long-term documentation of intent and tradeoffs.
  • Some emphasize keeping a single authoritative spec/plan to avoid conflicting sources of truth.
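A plan file kept as the single source of truth also becomes machine-checkable. A minimal sketch, assuming a hypothetical plan.md that uses GitHub-style task-list checkboxes (the `unchecked_items` helper and the sample plan are illustrative, not from any commenter's actual setup):

```python
import re

def unchecked_items(plan_text: str) -> list[str]:
    """Return the text of every unchecked '- [ ]' task in a markdown plan."""
    return re.findall(r"^\s*-\s*\[ \]\s*(.+)$", plan_text, flags=re.MULTILINE)

plan = """\
# Plan: audit logging
- [x] research existing log schema
- [ ] write migration
- [ ] add integration tests
"""

print(unchecked_items(plan))  # ['write migration', 'add integration tests']
```

A check like this can gate a merge: if the committed plan still lists unchecked items, the implementation and the spec have diverged.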

Effectiveness of AI coding

  • Enthusiasts report large productivity gains: shipping multi-feature apps or complex audit logging in hours instead of days/weeks, while still reviewing every line.
  • Skeptics say LLMs handle boilerplate but struggle with architecture, nontrivial correctness, maintainability, performance, and security; subtle errors and misaligned designs are common.
  • There’s concern that speed-ups often rely on trusting the agent rather than fully understanding its output, which isn’t acceptable in high-responsibility environments.

Prompting, “deeply,” and model behavior

  • A major subthread debates “magic words” like “deeply,” “in great detail,” or emotional framing.
  • Supporters argue these steer attention, increase “thinking”/tool calls, and measurably improve results; others dismiss this as superstition or the gambler’s fallacy.
  • Related concepts: model “laziness,” overthinking loops, mixture-of-experts routing, and the tension between probabilistic behavior and engineers’ desire for determinism.

Tools, workflows, and agents

  • Many point out existing systems that formalize plan → execute cycles: Claude plan mode, Kiro, Antigravity, SpecKit, OpenSpec, superpowers, and various custom skills.
  • Multi-agent setups are common: planner → implementer → reviewers (sometimes across different models like Claude, Codex, Gemini).
  • Some prefer small, batched plans rather than “big bang” implementations to limit damage and ease debugging.
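The batched approach reduces to a simple control loop: apply one small step, verify, and stop at the first failure so only that step needs debugging. A sketch with hypothetical `apply` and `check` callables standing in for the agent and the test suite:

```python
from typing import Callable, Iterable

def run_in_batches(
    steps: Iterable[str],
    apply: Callable[[str], None],
    check: Callable[[], bool],
) -> list[str]:
    """Apply plan steps one at a time; halt at the first step whose
    verification check fails, leaving that step as the sole suspect."""
    done = []
    for step in steps:
        apply(step)
        if not check():
            break
        done.append(step)
    return done

# Toy run: the check is rigged so the second step "fails" verification.
applied = []
result = run_in_batches(
    ["step 1", "step 2", "step 3"],
    apply=applied.append,
    check=lambda: len(applied) < 2,
)
print(result)  # ['step 1']
```

The contrast with a “big bang” run is the placement of `check`: inside the loop, a regression is pinned to one step; outside, it could have come from any of them.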

Verification, safety, and methodology

  • Strong emphasis from multiple commenters on tests (unit, integration, Playwright), scripts enforcing invariants, and automated checks in CI or git hooks.
  • Regulated/critical domains highlight permission boundaries and least-privilege for agents; full autonomy is seen as risky.
  • Several note that this all resembles classic software engineering: specs, design docs, phased implementation, and iterative review: “waterfall for LLMs” or “agile for agents,” depending on the lens.
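The invariant-enforcing scripts mentioned above are typically small checkers that exit nonzero so CI or a git hook blocks the change. A sketch with a deliberately simple, made-up invariant (no stray fix-me markers in Python sources); real invariants are project-specific:

```python
import tempfile
from pathlib import Path

MARKER = "FIX" + "ME"  # split so the checker never flags its own source

def find_violations(root: Path) -> list[str]:
    """Return 'file:line' for every occurrence of MARKER in *.py files under root."""
    hits = []
    for path in sorted(root.rglob("*.py")):
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if MARKER in line:
                hits.append(f"{path.name}:{lineno}")
    return hits

# Toy run against a throwaway tree; in CI you would scan the repo root
# and call sys.exit(1) on any hit to fail the pipeline or hook.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "ok.py").write_text("x = 1\n")
    (root / "bad.py").write_text("x = 1  # " + MARKER + ": remove\n")
    violations = find_violations(root)

print(violations)  # ['bad.py:1']
```

Wired into a pre-commit hook or a CI job, the nonzero exit is what turns the invariant from a convention into an enforced boundary that an agent cannot quietly cross.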