Things that helped me get out of the AI 10x engineer imposter syndrome
LLM Code Quality, Comments, and Tests
- Some report that with good rules, context files, and prompting, LLMs produce code that is cleaner and more “polished” (logging, error handling, tests) than what they write by hand.
- Others find AI-generated comments and tests mostly useless: restating code, focusing on “how” not “why,” tightly coupled to implementation, or missing meaningful assertions.
- Several people immediately tell models to stop writing boilerplate comments/docstrings and instead favor self‑documenting code and focused API docs.
“Vibe Coding” vs Assisted / Agentic Use
- Clear split between:
- Vibe coding: letting the model generate large chunks or whole apps with minimal review – widely seen as producing slop, security issues, and technical debt.
- Assisted/agentic use: humans design, decompose tasks, and use LLMs for boilerplate, refactors, tests, migration scripts, and small features. This is where people see real value.
- Terraform/infra and complex, legacy C/C++/enterprise codebases are recurring failure zones; models hallucinate resources/APIs or thrash in loops.
Realistic Productivity Gains
- Many experienced users converge around:
- 2–5x faster on the typing/writing part of coding,
- but only ~15–35% improvement in overall throughput once meetings, reviews, specs, QA, and coordination are included (see the back-of-the-envelope sketch after this list).
- Gains are largest for: greenfield prototypes, side projects, small refactors, “side‑quests” (docs, tests, scripts), and exploratory work on unfamiliar APIs.
- Several warn that bigger diffs, verbose logging/tests, and shallow understanding can reduce long‑term productivity via maintenance and review burden.
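A quick back-of-the-envelope check of how 2–5x typing speed collapses into modest overall gains. The coding-time fractions below are assumptions for illustration, not figures from the thread:

```python
# Amdahl's-law style estimate: only the hands-on coding fraction gets faster;
# meetings, reviews, specs, QA, and coordination stay the same speed.

def overall_speedup(coding_fraction: float, coding_speedup: float) -> float:
    """Overall speedup when only `coding_fraction` of the work is accelerated."""
    return 1.0 / ((1.0 - coding_fraction) + coding_fraction / coding_speedup)

if __name__ == "__main__":
    # Assumption: coding is 20-40% of total delivery work and gets 2-5x faster.
    for coding_fraction in (0.2, 0.3, 0.4):
        for coding_speedup in (2, 3, 5):
            s = overall_speedup(coding_fraction, coding_speedup)
            print(f"coding {coding_fraction:.0%} of work, {coding_speedup}x faster "
                  f"-> {s:.2f}x overall ({s - 1:.0%} gain)")
```

With those assumed fractions the overall gain lands roughly in the 10–45% range, which brackets the ~15–35% figure people report.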
Hallucinations, Verification, and Trust
- Strong disagreement about hallucination prevalence: some claim agents plus compilers/tests effectively eliminate them; others see persistent invented APIs, especially in Terraform, infra, and newer libraries.
- Consensus that LLM output must be reviewed at the same abstraction level a human would be responsible for; you can’t skip understanding just because the tool wrote it.
10x Engineer & Imposter-Syndrome Narrative
- Many view “AI 10x engineer” claims as hype from marketing, VCs, and social media; they don’t match observed team-level velocity.
- Several point out Amdahl’s law: speeding up coding alone can’t yield 10x feature delivery when most of the work is design, requirements, coordination, and risk management (the bound is sketched after this list).
- Commenters appreciate the article’s reassurance: you aren’t “standing still” or doomed if you’re not seeing 10x; modest, uneven gains are normal.
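Stated in Amdahl’s-law terms, with p the fraction of delivery work that is hands-on coding (an assumed quantity) and S_coding the speedup on that part:

```latex
S_{\text{overall}} = \frac{1}{(1 - p) + \dfrac{p}{S_{\text{coding}}}}
\;\xrightarrow[\;S_{\text{coding}} \to \infty\;]{}\; \frac{1}{1 - p}
```

Even with an infinitely fast coding step, overall speedup is capped at 1/(1 − p); a 10x team-level gain would require coding to be at least 90% of total work.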
Workflows and Best Practices Emerging
- Effective patterns mentioned: dedicated rules/claude.md files, rich local context, architect→plan→implement→test loops, parallel agents on multiple tasks, and using LLMs as search, tutor, and rubber duck (an illustrative rules file is sketched after this list).
- Strong engineers report biggest benefits when they already understand the problem and use LLMs to amplify their designs, not replace them.
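A minimal, hypothetical sketch of the kind of rules/claude.md file commenters describe; the project layout, commands, and conventions below are invented for illustration, not quoted from the thread:

```markdown
# CLAUDE.md (illustrative example only)

## Project context
- TypeScript monorepo; services live under packages/, shared code under libs/.
- Run `pnpm test` and `pnpm lint` before considering a task done.

## Coding conventions
- Prefer self-documenting names; do not add comments or docstrings that restate the code.
- Keep diffs small and focused; no drive-by refactors.
- Tests should assert behavior ("why"), not mirror the implementation ("how").

## Workflow
- For non-trivial tasks: propose a short plan first, wait for approval, then implement and run the tests.
```

The point of such a file is to front-load the reviewer’s standards (comment style, diff size, test expectations) so the model’s defaults match the team’s, rather than cleaning up after the fact.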