Automatic Programming
Accountability and code quality
- Many argue developers remain fully accountable for any code they merge, regardless of whether an LLM wrote it; team policies already assign ownership to whoever pushes the change.
- Others worry that shipping generated code you don’t fully understand creates technical debt, making later debugging expensive and painful.
- Several comments liken LLMs to powerful IDEs or instruments: you’re responsible for how you wield them, including testing and verification.
Spectrum from “vibe coding” to “automatic programming”
- Multiple posters reject a binary split between “vibe coding” and “automatic programming,” seeing instead a continuum of how much guidance the human gives and how much of the resulting code they understand.
- “Vibe coding” is often used pejoratively to mean shallow, first‑draft, slop‑like code; others note it can still be useful for quick experiments or non‑experts.
- Some suggest reframing the skill as “feedback/verification engineering”: constructing feedback loops and tests that keep the model aligned with a precise spec.
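As a rough illustration of that “feedback/verification engineering” framing, the sketch below treats a fixed test suite as the executable spec and feeds failures back to the model until it passes or gives up. Every name here (`llm_generate`, `slugify`, `test_slugify.py`) is hypothetical, not taken from any tool mentioned in the thread:

```python
import subprocess
from pathlib import Path

def llm_generate(prompt: str) -> str:
    """Return candidate source code for the prompt (placeholder for any model API)."""
    raise NotImplementedError("wire up your model or agent of choice here")

SPEC = """\
Write slugify(title: str) -> str in slugify.py:
- lowercase the input
- collapse runs of non-alphanumerics into a single '-'
- strip leading/trailing '-'
"""

def verification_loop(max_attempts: int = 5) -> bool:
    prompt = SPEC
    for _ in range(max_attempts):
        Path("slugify.py").write_text(llm_generate(prompt))
        # The test file is the human-owned spec; it never changes between attempts.
        result = subprocess.run(
            ["pytest", "-q", "test_slugify.py"], capture_output=True, text=True
        )
        if result.returncode == 0:
            return True  # output satisfies the spec-as-tests
        # Feed failures back verbatim; the human refines the spec and tests,
        # not the generated code.
        prompt = SPEC + "\nYour previous attempt failed these tests:\n" + result.stdout
    return False
```

The skill commenters describe lives in writing `test_slugify.py` tightly enough that “passing” actually means “correct.”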
Terminology debates
- Several participants dislike the term “automatic programming” because the process is not actually automatic; humans still steer and review.
- Others think separate labels are unnecessary: it’s all just “programming” with better power tools, like compilers or CAD.
- Alternative labels appear: “LLM‑assisted programming,” “zen coding,” “spec‑driven development,” “lite coding,” etc.
Ownership, attribution, and training data ethics
- One side: LLMs are tools; the output is a function of user skill, so the resulting code is “yours” if you take responsibility.
- Opposing side: it’s a collaboration with the model and, indirectly, with the (often uncredited, sometimes unwilling) human authors whose work was used for training.
- Concerns include:
  - LLMs reproducing recognizable code from books/blogs without attribution.
  - Open‑source licenses (MIT, GPL) imposing conditions (attribution, copyleft) that models cannot practically honor.
  - Feelings of violation when code is used for training without explicit consent.
- Others counter that all software builds on prior work, question the strength of IP in general, or argue training may be fair use; legal status is described as unsettled.
Spec‑driven development, waterfall vs agile
- Several detailed comments describe a workflow where humans write and iteratively refine specs (often with LLM help), then have agents implement them, treating the code itself as cheap and disposable (a rough sketch follows this list).
- This is compared to:
  - Classic waterfall/spec‑heavy methods (PRIDE, “design by contract”).
  - Agile’s focus on short feedback loops.
- Disagreement:
  - Some say careful upfront requirements plus AI implementation outperform agile’s “build 3 times” churn.
  - Others stress that requirements almost always evolve, so prototypes and user feedback remain essential.
- A recurring idea: AI dramatically lowers the cost of iteration, which can actually encourage deeper planning and multiple rewrites.
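To make the “code as cheap and disposable” workflow concrete, here is one possible shape of it; every name (spec.md, generated/, acceptance/, `llm_implement`) is invented for illustration rather than taken from any commenter’s setup:

```python
import shutil
import subprocess
from pathlib import Path

SPEC = Path("spec.md")          # human-maintained requirements, refined over time
GENERATED = Path("generated")   # model-written implementation, safe to throw away

def llm_implement(spec_text: str) -> dict[str, str]:
    """Return {relative_path: source_code} implementing the spec (placeholder)."""
    raise NotImplementedError("call your coding agent here")

def regenerate() -> bool:
    # Delete the previous implementation instead of patching it: the spec,
    # not the code, is the durable artifact.
    shutil.rmtree(GENERATED, ignore_errors=True)
    GENERATED.mkdir()
    for rel_path, source in llm_implement(SPEC.read_text()).items():
        out = GENERATED / rel_path
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(source)
    # Acceptance tests are written against the spec, so they survive every
    # rewrite and decide whether this regeneration is shippable.
    return subprocess.run(["pytest", "-q", "acceptance/"]).returncode == 0
```

Whether this looks more like waterfall or like very fast agile is exactly what the thread disagrees about: the spec is heavyweight and upfront, but because regeneration is cheap, it can be revised and rerun as often as a sprint.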
Industry impact, hype, and adoption
- Some believe AI‑assisted programming will become the default, making non‑AI coding rare.
- Skeptics say there’s little evidence yet that AI improves overall software outcomes or that the economics are sustainable.
- There’s pushback against AI‑driven FOMO: most teams aren’t working this way; you needn’t panic, but completely ignoring AI is also seen as risky.
- One thread speculates about future royalty models where AI vendors might claim a share of value created with their tools; others doubt that’s viable for general software.
Cultural and emotional reactions
- Several posts express frustration or disappointment at seeing admired programmers enthusiastically promote AI workflows, reading it as hype or self‑branding.
- Others romanticize “artisan” programmers and worry that future generations won’t develop deep low‑level skills; some counter that these skills are still required for anything non‑trivial.
- There’s visible polarization: some see AI as an unprecedented creative enabler; others see “slop coding,” energy waste, and weak moral arguments about “collective gifts.”