Read your code
Definitions and Terminology
- Strong disagreement over what “vibe coding” means:
  - “Originalist” view (attributed to Karpathy): you don’t care about the code, don’t read it, just hammer prompts and error messages until it “kinda works.”
  - Newer proposal: “vibe-coding” as dialogue-based, human-guided implementation with an AI, including reading/reviewing code.
- Many argue we need separate terms:
  - One for responsible AI-assisted development with review and tests.
  - One for “blind” prompting that produces unreviewed code.
- Alternatives suggested: “AI-assisted development,” “LLM-assisted coding,” “orchestration,” “vibe architecting,” etc.
Should You Read AI-Generated Code?
- One camp refuses to read AI-generated code, focusing instead on type signatures, prompts, and “see if it works” checks via testing or trial and error.
- Others argue “see if it works” really means “apparently works,” with unknown edge cases and missed requirements.
- Many insist reading code is still essential for:
  - Security, compliance, observability, scalability, resilience.
  - Long-term maintainability and debugging.
- Some predict that insisting on reading AI output will eventually seem as odd as reading compiler output; others say we are “nowhere near” that yet.
Quality, Risk, and Engineering Discipline
- Concern that vibe coding without review leads to:
  - Security holes, fake features, brittle hacks, unmaintainable bloat.
- Comparisons between LLMs and compilers:
  - Compilers are deterministic and grounded in formal semantics; LLMs are fuzzy, non-deterministic, and version-fragile (a toy sketch of the contrast follows this list).
- Several see this as moving in the opposite direction from software as an engineering discipline, which should be shifting toward more formal, reproducible processes, not less formal ones.
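To make the determinism contrast concrete, here is a minimal Python sketch (an illustration, not code from the discussion): a compiler-like transformation returns identical output for identical input on every run, while a sampling-based generator does not unless the temperature and every other source of variation is pinned. The functions and outputs are invented stand-ins.

```python
import random

def compile_like(source: str) -> str:
    """Compiler-style step: deterministic, same input -> same output, every run."""
    return source.upper()  # stand-in for a formally specified transformation

def llm_like(prompt: str, temperature: float = 0.8) -> str:
    """LLM-style step: output varies between runs unless sampling is fully pinned."""
    candidates = [f"{prompt} -> version A", f"{prompt} -> version B", f"{prompt} -> version C"]
    if temperature == 0.0:
        return candidates[0]           # greedy decoding is repeatable in principle...
    return random.choice(candidates)   # ...but typical sampling is not

assert compile_like("fn main() {}") == compile_like("fn main() {}")   # always holds
print(llm_like("write a parser"), "|", llm_like("write a parser"))    # may print two different versions
```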
Impact on Learning and Roles
- Worry that seniors are now “shepherding AI agents” instead of mentoring juniors, reducing juniors’ opportunities to learn.
- Vibe coding is seen as especially harmful for beginners who can’t yet read code well and are handed large, messy AI-generated codebases.
- Others report positive stories: non-programmers using LLMs to build apps, then organically learning concepts like refactoring and architecture.
Working Patterns with LLMs
- Common advice:
  - Treat the AI as a junior: always review and understand its code before execution (a minimal test-gate sketch follows this list).
  - Maintain strong practices: tests, staging, human review, ownership rules (“you commit it, you own it”).
  - Use LLMs for boilerplate, CRUD, tests, and refactors; rely on experienced humans for novel, complex, high-risk functionality.
- Some emphasize that reading and reviewing code can be as time-consuming as writing it, so heavy reliance on AI may hit a throughput ceiling unless review requirements are loosened, which comes at the cost of added risk.
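As a concrete sketch of the “treat the AI as a junior” pattern, the following hypothetical Python example shows human-written pytest checks acting as the review gate for an AI-generated helper; the `slugify` function and its requirements are invented for illustration, not taken from the thread.

```python
import re

# --- AI-generated implementation: reviewed line by line before commit ---
def slugify(title: str) -> str:
    """Turn an arbitrary title into a URL-safe slug (hypothetical example)."""
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse runs of non-alphanumerics
    return slug.strip("-")

# --- Human-written review gate: the tests, not vibes, decide acceptance ---
def test_basic_title():
    assert slugify("Hello, World!") == "hello-world"

def test_whitespace_and_symbols():
    assert slugify("  Vibe Coding?? 2024 ") == "vibe-coding-2024"

def test_empty_input():
    assert slugify("") == ""
```

Run under pytest in CI; the point is that acceptance is decided by human-authored checks and review, not by whether the output merely looks plausible.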