After two years of vibecoding, I'm back to writing by hand [video]
Scope and Balance of AI-Assisted Coding
- Many commenters say current agents can’t replace hand-written code but are useful for tedious, low-risk tasks: small scripts, boilerplate, config wiring, refactors, and tests.
- Good fit: non-critical tooling, one-off utilities, CRUD frontends on top of robust backends, Streamlit/Shiny-style demo apps (a minimal sketch follows this list).
- Poor fit: critical systems (payments, ERP, core business logic), complex math/parallelism, architecture and design.
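To make the "good fit" bucket concrete, here is a minimal sketch of the kind of disposable Streamlit demo app commenters have in mind. The signal, sliders, and column names are invented for illustration; only the Streamlit calls (`st.title`, `st.slider`, `st.line_chart`) are real API.

```python
# demo_app.py -- a throwaway Streamlit dashboard, the kind of low-risk
# app cited as a good fit for AI generation. Run: streamlit run demo_app.py
# All data here is synthetic; nothing is load-bearing.
import numpy as np
import pandas as pd
import streamlit as st

st.title("Signal smoothing demo")

# Let the user pick the noise level and the rolling-mean window width.
noise = st.slider("Noise level", 0.0, 2.0, 0.5)
window = st.slider("Smoothing window", 1, 50, 10)

# Synthetic signal: a sine wave plus Gaussian noise.
x = np.linspace(0, 10, 500)
y = np.sin(x) + np.random.normal(0, noise, size=x.shape)

df = pd.DataFrame({"raw": y})
df["smoothed"] = df["raw"].rolling(window, min_periods=1).mean()

st.line_chart(df)
```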
Vibecoding vs Targeted Assistance
- “Vibecoding” (letting agents build entire apps from vague specs) is widely seen as brittle: results may “work” but be incoherent, hard to maintain, or subtly wrong.
- Several people report that asking an LLM to complete an app from epic-level descriptions “kinda works” for toy projects, but is clearly unacceptable for real products.
- A recurring complaint: tools over-refactor, add unnecessary complexity, or touch many files when a simple, local fix would suffice.
Responsibility, Quality, and Technical Debt
- Strong consensus that responsibility for code remains entirely with the human; AI won’t be blamed when things fail.
- Concern that management will use AI to push a “ship faster” culture, increasing the volume of low-quality code and incidents.
- Some argue AI can actually improve rigor when used with strong artifacts (design docs, tests, structured agents); others see it mainly as a way to generate more technical debt faster.
- Tests and green CI are called out for giving a false sense of safety when coverage is thin or assertions are weak, as the sketch below illustrates.
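A small illustration of that failure mode: under pytest, both tests below go green, but only the second would catch a regression. `apply_discount` and its values are hypothetical examples, not code from the thread.

```python
# weak_vs_real_test.py -- both tests pass, but only one protects against
# regressions. `apply_discount` and its test values are made up for this sketch.
import pytest

def apply_discount(price: float, rate: float) -> float:
    """Return price reduced by rate (e.g. 0.2 for 20% off)."""
    return price * (1 - rate)

def test_discount_weak():
    # Green no matter what apply_discount returns: it only checks that
    # the call doesn't raise. This is the "false sense of safety" case.
    apply_discount(100.0, 0.2)

def test_discount_real():
    # Pins the actual behaviour; a regression in the formula fails here.
    assert apply_discount(100.0, 0.2) == pytest.approx(80.0)
    assert apply_discount(100.0, 0.0) == pytest.approx(100.0)
```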
Effects on Thinking and Craft
- Several note AI helps with “kinetic” coding (typing, boilerplate) but can weaken the developer’s mental model and architectural thinking if overused.
- Others argue it frees time to think more deeply about real problems, analogous to moving from low-level programming to higher abstractions.
- Some express discomfort or sadness at losing the “tasty bits” of hands-on problem solving and learning, especially when AI is used to implement things “too hard” for the programmer to understand.
Careers, Hiring, and Industry Dynamics
- Many aren’t personally worried about being replaced “right now,” but are worried about:
  - Perception-driven hiring freezes and expectations that fewer devs can do more with AI.
  - Especially grim prospects for juniors, who struggle to get initial experience.
  - Difficulty distinguishing real skill from LLM-boosted interview answers and contractor work.
- Self-driving cars are a popular analogy: big gains for assistance, but fully autonomous replacement may be much farther away than hype suggests.
Middle-Ground Practices
- Common advice:
  - Use AI for small, well-scoped tasks and boilerplate; avoid giving it end-to-end ownership.
  - Break work into small tickets, keep a refactoring backlog, and enforce code review and CI equally for human and AI changes.
  - Be explicit in prompts (e.g., no extra refactors, minimal changes) and treat the model like a junior dev whose work must be checked; a sample instruction preamble follows this list.
- There’s broad rejection of all-or-nothing positions: both “LLMs are useless” and “LLMs will do everything” are seen as unhelpful extremes.
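As one concrete reading of the "be explicit in prompts" advice above, here is a sketch of a constraint preamble that could be prepended to an agent's task description. The wording and the `build_prompt` helper are hypothetical illustrations, not a format prescribed by any tool or by the thread.

```python
# A hypothetical constraint preamble to prepend to an agent/LLM task prompt.
# The wording is illustrative; the point is making scope limits explicit.
PROMPT_PREAMBLE = """\
You are making a change to an existing codebase. Constraints:
- Make the minimal change that fixes the issue described below.
- Do not refactor, rename, or reformat code outside the affected lines.
- Do not add new dependencies or touch files not listed in the task.
- If the fix genuinely requires a larger change, stop and explain why
  instead of making it.

Task:
{task_description}
"""

def build_prompt(task_description: str) -> str:
    """Fill the preamble with a concrete task description."""
    return PROMPT_PREAMBLE.format(task_description=task_description)
```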