Agentic Coding Recommendations
State of Agentic Coding Tools
- Claude Code is seen as the current benchmark; several note there’s no equally good open‑source/local alternative yet.
- Aider is the main “almost there” OSS tool: strong for editor‑agnostic pair‑programming, but weaker at autonomous exploration, tool calling, and self‑prompting.
- Other emerging options: OpenCode, Cursor/Devin‑style cloud agents, CodeCompanion for Neovim, JetBrains tools, Roo Code, Amazon Q, and various browser/CLI agents built on OpenAI‑compatible APIs.
- Tool/MCP integration and robust tool calling are recurring pain points; several projects have open PRs or only early MCP support.
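For context on what "robust tool calling" involves, here is a minimal sketch of the loop most of these agents implement against an OpenAI-compatible chat API. The `read_file` tool and the model name are illustrative assumptions, and error handling is elided.

```python
# Minimal sketch of an agent's tool-calling loop against an
# OpenAI-compatible chat API. The read_file tool is a hypothetical
# example; a real agent registers many tools and handles errors.
import json
from openai import OpenAI

client = OpenAI()  # any OpenAI-compatible endpoint works via base_url

TOOLS = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Return the contents of a file in the repo.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

messages = [{"role": "user", "content": "Summarize main.py"}]
while True:
    resp = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=TOOLS
    )
    msg = resp.choices[0].message
    if not msg.tool_calls:       # model answered in plain text: done
        print(msg.content)
        break
    messages.append(msg)         # keep the assistant turn in history
    for call in msg.tool_calls:  # execute each requested tool call
        args = json.loads(call.function.arguments)
        result = read_file(**args)  # single-tool demo: always read_file
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": result,
        })
```

Much of the fragility people report lives in this loop: models that emit malformed arguments, drop the `tool_call_id`, or stop requesting tools mid-task.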
Cost and Usage Patterns
- Claude’s flat-rate plans are contrasted with per-token tools like Aider; some find Aider cheaper at modest usage, while others see Claude’s subscription as a discount relative to raw API pricing.
- Many report total monthly AI spend under $20, especially when they control context size and use cheaper models (e.g., DeepSeek at off-peak rates).
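A back-of-envelope sketch of how that comparison works out. Every price and volume below is an illustrative assumption, not any provider's actual rate.

```python
# Back-of-envelope comparison of flat-rate vs. per-token spend.
# All prices and volumes here are illustrative assumptions, not
# quotes from any provider's current price sheet.
FLAT_RATE = 20.00        # $/month, e.g. a fixed subscription plan
PRICE_IN = 0.50 / 1e6    # assumed $ per input token
PRICE_OUT = 2.00 / 1e6   # assumed $ per output token

days = 22                # working days per month
in_tok = 200_000         # assumed input tokens per day
out_tok = 20_000         # assumed output tokens per day

per_token = days * (in_tok * PRICE_IN + out_tok * PRICE_OUT)
print(f"per-token: ${per_token:.2f}/mo vs flat: ${FLAT_RATE:.2f}/mo")
# At these volumes per-token usage is ~$3/month, well under the flat
# rate; 10x the input context flips the comparison. This is why
# context management (below) doubles as cost control.
```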
Effectiveness and Limitations
- Strong praise for agents on:
  - One-off scripts and boilerplate
  - Fixing large batches of type or lint errors
  - Small, well-scoped features and refactors
- Weaknesses frequently cited:
  - Complex refactors, performance work, and large or legacy codebases
  - Reliability with Rust, sweeping changes, or fully autonomous “yolo” runs
  - Hallucinations in API/library selection and in product research
- Some report transformative productivity; others say agents are net negative beyond demos and small toys.
Impact on Code Style and Quality
- Many note convergence between “AI-friendly” code and good human-friendly engineering: simple structure, clear interfaces, few dependencies, strong tests, good error messages (see the sketch after this list).
- Debate over AI code quality: widely described as junior–intermediate level; proponents argue you must treat it like a junior dev (style guides, design docs, reviews) or encode that into agent prompts.
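A toy illustration of the overlap claimed above, with invented names: the same traits that help an agent (small surface area, explicit types, informative errors) also help a human reviewer.

```python
# Toy example of "AI-friendly" code that is equally human-friendly:
# one clear job, explicit types, no hidden dependencies, and error
# messages that say where and why. All names are invented.
from pathlib import Path

def load_config(path: Path) -> dict[str, str]:
    """Parse KEY=VALUE lines from a simple config file."""
    if not path.is_file():
        raise FileNotFoundError(
            f"config not found: {path} (expected a KEY=VALUE file)"
        )
    config: dict[str, str] = {}
    for lineno, line in enumerate(path.read_text().splitlines(), start=1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "=" not in line:
            # A precise error helps an agent's retry loop and a human
            # reading a stack trace in exactly the same way.
            raise ValueError(
                f"{path}:{lineno}: expected KEY=VALUE, got {line!r}"
            )
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config
```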
Language and Stack Choices
- Go, PHP, JS/React/Tailwind, and Ruby/Rails are often reported as working especially well due to stable APIs, rich training data, and good tooling.
- Typed languages with strong compilers (Rust, TypeScript, Go) help agents via error-driven correction (sketched below this list), though some see overcomplicated Rust output.
- Others have success with Elixir/Phoenix, Clojure, and even Common Lisp when agents can access REPLs, docs, or project‑specific tools.
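A sketch of the error-driven correction loop such stacks enable, under stated assumptions: `ask_llm` is a hypothetical stand-in for any chat-completion call, and `mypy` could equally be `tsc`, `cargo check`, or `go build`.

```python
# Sketch of error-driven correction: run the compiler/type checker,
# feed its diagnostics back to the model, and retry. ask_llm() is a
# hypothetical stand-in for any chat-completion call.
import subprocess
from pathlib import Path

def ask_llm(prompt: str) -> str:
    """Hypothetical: returns the full corrected file from some model."""
    raise NotImplementedError

def fix_until_clean(path: Path, max_rounds: int = 5) -> bool:
    for _ in range(max_rounds):
        result = subprocess.run(
            ["mypy", str(path)], capture_output=True, text=True
        )
        if result.returncode == 0:
            return True                 # checker is satisfied; stop
        # The checker's output is the feedback signal: no human needed
        # to tell the model what it got wrong.
        prompt = (
            "Fix these type errors and return the whole file:\n"
            f"{result.stdout}\n---\n{path.read_text()}"
        )
        path.write_text(ask_llm(prompt))
    return False                        # give up after max_rounds
```

The stronger the checker, the tighter this loop, which is one reason typed stacks are reported to work well even when the raw completions are mediocre.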
Ecosystem and Future of Languages
- Concern: agents may entrench current “simple, popular” stacks and make new languages/frameworks harder to adopt.
- Counter-arguments:
  - Agents can learn new stacks via context docs, synthetic data, and tool-based exploration.
  - Future frameworks may be intentionally “AI-friendly” and ship with AI-oriented docs and tools.
- Broader speculation ranges from “languages become assembly for agents” to new languages designed primarily for LLM consumption.
Workflows, Prompting, and Project Setup
- Recommended practices:
  - Separate planning from coding, often using a stronger model for the design phase.
  - Have agents write design/requirements docs and explicit checklists before making edits.
  - Maintain an AI conventions file (AGENTS.md) as an onboarding doc for agents (see the sketch after this list).
  - Use containers and isolated dev environments (e.g., container-use) to run agents safely and in parallel.
  - Carefully manage context (add/drop files, smaller windows) to save cost and improve focus.
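A minimal sketch of what such an AGENTS.md conventions file might contain; the specific rules are invented for illustration.

```markdown
# AGENTS.md (illustrative sketch)

## Build & test
- Run `make test` before proposing any diff; all tests must pass.

## Style
- Prefer small, single-purpose functions.
- Do not add new dependencies without asking first.

## Workflow
- Write a short design note in `docs/plans/` before multi-file edits.
- Never touch files under `vendor/` or `migrations/`.
```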
Attitudes, Skepticism, and Ethics
- Some describe the experience as being a “senior dev with many eager juniors,” shifting their own focus to review, validation, and architecture.
- Others find reviewing AI‑generated patches as time‑consuming as writing code themselves, and remain unconvinced the trade‑off is worth it.
- There is worry about over‑reliance (“giving up reasoning”), job displacement, and massive corporate incentives driving hype.
- Several call for more concrete, reproducible examples (repos, streams, diffs) rather than vague claims of productivity gains.