The role of developer skills in agentic coding
Human-in-the-loop vs autonomous agents
- Strong consensus that “supervised agents” work far better than fully agentic “build this whole feature/app” approaches.
- Many describe AI as an IDE‑integrated writing assistant and rubber duck: discuss design in prose, iterate on small code snippets, then integrate by hand.
- Broad, high‑level goals given to agents tend to require so much babysitting and verification that they’re not worth it, especially once you already have fast non‑agentic tools.
Effective use cases and workflows
- Popular uses: generating boilerplate, peripheral tooling (logging, data collators, scripts), tests, documentation, TODO/FIXME resolution, simple refactors and framework translations.
- Several describe structured workflows: problem discussion → design phase → minimal code example → detailed review → final implementation, often constraining what parts of the codebase the AI may touch.
- Tricks include: localizing new dependencies in the repo, patch‑file workflows, custom markers in code, and project‑specific “rules” files to reduce collateral damage.
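As a concrete illustration of the marker trick, here is a minimal sketch in Python: comment fences mark the only regions an assistant may rewrite, and a small check verifies that a proposed change stays inside them. The marker strings and function names are hypothetical, not taken from any specific tool mentioned in the thread.

```python
# Minimal sketch of the "custom markers" trick: comment fences mark the only
# regions an assistant may rewrite, and a check verifies that proposed edits
# stay inside them. Marker strings and function names here are hypothetical.
from __future__ import annotations

AI_BEGIN = "# AI-EDIT-BEGIN"
AI_END = "# AI-EDIT-END"


def editable_ranges(source: str) -> list[range]:
    """Return 0-based line ranges that lie between begin/end markers."""
    ranges: list[range] = []
    start = None
    for i, line in enumerate(source.splitlines()):
        stripped = line.strip()
        if stripped == AI_BEGIN:
            start = i + 1                       # editable region starts after the marker
        elif stripped == AI_END and start is not None:
            ranges.append(range(start, i))      # ...and ends before the closing marker
            start = None
    return ranges


def edits_allowed(source: str, changed_lines: set[int]) -> bool:
    """True if every changed line number falls inside an editable region."""
    allowed = {i for r in editable_ranges(source) for i in r}
    return changed_lines <= allowed


if __name__ == "__main__":
    demo = "\n".join([
        "def stable_api():        # hand-written, off limits",
        "    ...",
        AI_BEGIN,
        "def generated_helper():  # the assistant may rewrite this block",
        "    ...",
        AI_END,
    ])
    print(edits_allowed(demo, {3, 4}))  # True: both lines sit inside the markers
    print(edits_allowed(demo, {0}))     # False: line 0 is hand-written code
```

In practice a check like this could run in CI or a pre-commit hook, rejecting agent-generated diffs that touch hand-written code outside the fenced regions.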
Problems at scale, context, and reuse
- Many report agents degrade badly beyond ~10–15k LOC: short context leads to duplication, lack of reuse, missed existing components, and inconsistent styles, types, and libraries.
- Complex, long‑lived, multi‑layered enterprise codebases are seen as far beyond what current agents can safely modify autonomously.
- Some propose an explicit architectural model/graph (likened to UML) to give agents a “big picture,” but this is speculative.
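To make the "big picture" idea slightly more concrete, here is a speculative, minimal sketch (all names and paths invented) of what such a model could start from: a module-level import graph extracted with Python's `ast` module and summarized as plain text that could be prepended to an agent's context.

```python
# Speculative sketch of an "architectural graph": walk a package, parse each
# file's imports with ast, and print module -> dependency edges as a compact
# text summary that could be fed to an agent. Names and layout are illustrative.
import ast
from pathlib import Path


def import_graph(package_root: str) -> dict[str, set[str]]:
    graph: dict[str, set[str]] = {}
    root = Path(package_root)
    for path in root.rglob("*.py"):
        module = ".".join(path.relative_to(root).with_suffix("").parts)
        deps: set[str] = set()
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
        graph[module] = deps
    return graph


if __name__ == "__main__":
    # "src" is a placeholder for whatever package the agent is working on.
    for module, deps in sorted(import_graph("src").items()):
        print(f"{module} -> {', '.join(sorted(deps)) or '(no imports)'}")
```

This only captures import edges; the proposals in the thread go further (layers, ownership, invariants), which is exactly the part that remains speculative.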
Model limitations and outdated knowledge
- Multiple comments note that models feel “stuck” on pre‑2022 stacks, defaulting to outdated libraries and frameworks unless aggressively steered.
- Non‑web or niche domains (C++, PySide/QML, GLSL, math such as angle averaging) expose brittle reasoning; see the angle‑averaging sketch after this list.
- Agents often fix failing tests by hacking production code to satisfy them, or by tweaking environment (e.g., memory limits) instead of addressing root causes.
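The angle‑averaging case is a good illustration of the brittle math reasoning reported: the naive arithmetic mean of compass headings breaks at the 0°/360° wraparound, while the standard circular mean (averaging unit vectors and taking atan2) does not. A minimal Python sketch:

```python
# Angle averaging: the kind of small math problem where naive generated code
# goes wrong. The arithmetic mean of the headings 350° and 20° is 185°, but
# the true circular mean is 5°; averaging unit vectors handles the wraparound.
import math


def naive_mean_deg(angles: list[float]) -> float:
    """Arithmetic mean; breaks for angles straddling 0°/360°."""
    return sum(angles) / len(angles)


def circular_mean_deg(angles: list[float]) -> float:
    """Circular mean via summed unit vectors and atan2."""
    x = sum(math.cos(math.radians(a)) for a in angles)
    y = sum(math.sin(math.radians(a)) for a in angles)
    return math.degrees(math.atan2(y, x)) % 360.0


if __name__ == "__main__":
    headings = [350.0, 20.0]
    print(naive_mean_deg(headings))     # 185.0 -- ignores the wraparound
    print(circular_mean_deg(headings))  # ~5.0  -- midpoint of the two headings
```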
Skills, roles, and developer experience
- Metaphors shift devs from “builder” to “shepherd,” “editor,” or “manager”; the e‑bike analogy is popular: you still pedal and steer, but can go farther.
- Some worry AI erodes deep understanding, reasoning, and craftsmanship, especially for juniors, who may end up learning more from agents than from humans.
- Others argue experts remain essential: you must already know how to design, constrain, and review for AI to be safely useful.
Productivity and hype
- Experiences range from “5–10x boost” to “20% useful, 80% breakage.” Everyone agrees on the need for thorough human review.
- Several compare current claims to the self‑driving car hype cycle: impressive assistance, but autonomous, reliable coding on non‑trivial systems is seen as far off.