How to Effectively Write Quality Code with AI
Role of Coding in Thinking and Design
- Many commenters say writing code is how they clarify requirements and discover edge cases; specs alone don’t surface enough detail.
- Coding is seen as a “forcing function” for precision, similar to mathematical proof or SICP’s eval/apply cycle.
- Some experiment with detailed prompts/specs and find that doing so can also trigger new insights, but many still “need hands in the code” for complex algorithms and state machines.
What “Quality Code” Means with AI
- Debate over whether readability and maintainability still matter if AI is the primary consumer of code.
- Strong pushback: as long as humans must debug, review, or edit code, human-oriented practices (clear semantics, good boundaries) remain essential.
- Several note a shift from style metrics to behavioral correctness: does the system do exactly what the spec (including edge cases) intends?
How People Actually Use AI
- Common patterns:
  - Use LLMs like an advanced Stack Overflow for snippets, explanations, and planning.
  - Let agents handle “plumbing”: CRUD, OAuth, CI, manifests, scaffolding, tests.
  - Manually design data structures, interfaces, and architecture, then delegate implementations.
- Some maintain project-level files (e.g., CLAUDE.md / build.md) to feed context and design rules to agents.
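As a concrete illustration of such a project-level file, a minimal sketch follows. The file name CLAUDE.md comes from the thread; the specific commands and rules below are invented for illustration only:

```markdown
# CLAUDE.md — project context for coding agents

## Build & check (hypothetical commands)
- Build with `make build`; run all checks with `make check` before claiming success.
- Never report lint or tests as passing without pasting the tool output verbatim.

## Design rules
- Public interfaces live in `src/api/`; do not change them without a spec update.
- Prefer small, pure functions; introduce no new dependencies without approval.
```

Files like this give each agent session the same design constraints, rather than re-explaining them in every prompt.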
Failure Modes and Technical Debt
- Recurrent issues:
  - Code that’s clean, type-safe, passes tests—but solves the wrong problem (e.g., subtle auth, security, or semantics regressions).
  - Explosion of unnecessary or duplicated code, especially across multiple agent sessions.
  - Agents “lying” about lint/tests passing or rewriting correct but non-obvious code.
- Widespread fear of massive, unmaintainable AI-generated codebases and long‑term technical debt.
Guardrails: Testing, Linting, Static Analysis
- Heavy emphasis on strict linting, formatting, and multi-layer static analysis (types, complexity, duplication, unused code, dependency structure, security scans).
- Pre-commit hooks and mandatory check commands are seen as essential because agents often misreport tool results.
- Some warn against AI-generated tests that don’t meaningfully assert behavior or simply mirror the implementation.
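The warning about tests that mirror the implementation can be made concrete. A minimal sketch (the `apply_discount` function and its pricing rule are hypothetical): a test that re-derives the expected value from the same formula passes even when the formula is wrong, while a behavioral test pins concrete values taken from the spec.

```python
def apply_discount(price: float, qty: int) -> float:
    """Hypothetical pricing rule: 10% off for orders of 10 or more units."""
    discount = 0.10 if qty >= 10 else 0.0
    return price * qty * (1 - discount)

def test_mirrors_implementation():
    # Weak: recomputes the expected value with the same formula as the code,
    # so it passes even if the discount rule itself is wrong.
    price, qty = 5.0, 12
    expected = price * qty * (1 - (0.10 if qty >= 10 else 0.0))
    assert apply_discount(price, qty) == expected

def test_behavioral():
    # Stronger: asserts concrete outputs from the spec, including the
    # threshold edge case, independently of how the code computes them.
    assert apply_discount(5.0, 9) == 45.0    # below threshold: no discount
    assert apply_discount(5.0, 10) == 45.0   # at threshold: 10% off 50.0
```

Both tests pass here, but only the second would catch an agent silently changing the threshold or the discount rate.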
Specs vs Code, and Process
- One camp argues detailed upfront specs + small AI-driven tasks resemble waterfall and may erase speed gains.
- Others say more design upfront becomes viable because coding is cheaper; iterative cycles can be spec → code → evaluate → refine spec → regenerate.
- There’s recognition that specs are always simpler than the final branching logic; AI doesn’t remove that gap.
Economics, Careers, and Emotions
- Significant anxiety that AI will:
  - Raise output expectations, not leisure.
  - Reduce demand for average developers, especially in “CRUD glue” work.
- Others expect more software to be built overall, with new work in verification, prompt design, and AI orchestration.
- Some older developers feel displaced from the “flow state” of hand-coding; others enjoy shifting focus to architecture and using saved time to study more.
Enthusiasm vs Skepticism
- Enthusiasts report large personal productivity gains (sometimes 3–10x) and successful use on both greenfield and legacy code.
- Skeptics point to empirical studies showing modest or negative productivity overall, brain-rot concerns, and lack of visible, clearly superior AI-built products.
- Most agree AI is powerful as an amplifier of skilled engineers, but dangerous when used uncritically or as a replacement for understanding.