I know when you're vibe coding
What “vibe coding” is capturing
- Many see “vibe coding” as dumping AI‑generated code into a repo without understanding it or integrating it into existing patterns: new HTTP clients instead of shared utilities, duplicate helpers, class components in functional React codebases, ad‑hoc config changes, and so on (see the sketch after this list).
- Several argue this isn’t new or AI‑specific: rushed or inexperienced humans have always reinvented wheels, mixed styles, and ignored conventions—LLMs just scale that behavior.
- Others disagree, saying LLM misuse produces a distinctive volume and style of slop that feels worse than typical hurried human work.
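
A concrete, hedged illustration of that pattern (every name below, such as `sharedFetch` and `api.example.com`, is hypothetical, not code quoted from the thread):

```typescript
// Hedged sketch, not code from the thread: a repo with one shared HTTP
// utility, and a "vibe coded" function that bypasses it.

interface User {
  id: string;
  name: string;
}

// The repo's existing convention: one place for the base URL, auth
// headers, and error handling.
async function sharedFetch<T>(path: string): Promise<T> {
  const res = await fetch(`https://api.example.com${path}`, {
    headers: { Authorization: `Bearer ${process.env.API_TOKEN}` },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status} on ${path}`);
  return res.json() as Promise<T>;
}

// Integrated version: follows the convention.
export const getUser = (id: string) => sharedFetch<User>(`/users/${id}`);

// Dumped version: re-implements the base URL ad hoc, skips the auth and
// error handling, and silently drifts from the shared utility.
export async function getUserVibed(id: string): Promise<User> {
  const res = await fetch(`https://api.example.com/users/${id}`);
  return (await res.json()) as User;
}
```

The second function runs and returns data, which is exactly why commenters say it slips through review: the damage is architectural rather than functional.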
LLM capabilities and trajectory
- Some commenters are very bullish: newer models are described as context‑aware, able to respect project style, and eventually likely to outperform humans on most programming tasks.
- Strong skeptics counter that LLMs are just stochastic token generators, not true abstractions like compilers; they still hallucinate, still don’t follow specs deterministically, and are constrained by training data and business economics.
- There’s disagreement over whether models have “actually” improved much recently: some cite benchmarks and real‑world coding, others say failure modes are unchanged or models feel “nerfed.”
Context, rules, and tooling
- A recurring theme: problems mostly arise from short or dirty context and weak “context engineering.”
- Suggested mitigations: linters, formatters, strict typing, tests, repo indexing, large context windows, CLAUDE.md / Cursor rules / project‑specific guidelines, and sub‑agents to keep contexts clean (a minimal guardrail config is sketched after this list).
- However, people report that models often ignore rules or forget them as context grows; instructions are seen as helpful but not reliable hard guardrails.
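
This is the contrast several commenters draw between soft instructions and deterministic tooling. A minimal sketch of the latter, assuming an ESLint flat config and a hypothetical shared client at `src/lib/http` (neither detail is from the thread):

```js
// eslint.config.mjs: hedged sketch of a guardrail that holds even when a
// model ignores or forgets its CLAUDE.md instructions.
export default [
  {
    files: ["src/**/*.{ts,tsx}"],
    rules: {
      // Unlike a rules file, this fails CI every time generated code
      // pulls in its own HTTP client instead of the shared utility.
      "no-restricted-imports": [
        "error",
        {
          paths: [
            {
              name: "axios",
              message: "Use the shared client in src/lib/http instead.",
            },
          ],
        },
      ],
    },
  },
];
```

One reading of the thread's advice: keep CLAUDE.md‑style files for intent and style, and push anything that must hold unconditionally into linters, formatters, and the type checker.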
Impact on teams, juniors, and productivity
- Many treat LLMs as ultra‑junior devs: helpful for boilerplate but requiring tight scoping, explicit specs, and thorough review.
- Concerns: code‑review load explodes, job satisfaction drops (less time writing code, more time cleaning up slop), and a generation of developers may learn less deeply.
- Several note that LLMs make weak or mediocre developers much faster at producing bad code, raising the specter of the “net‑negative programmer.”
- The empirical impact on productivity is contested: some see genuine speedups, others say the time spent de‑slopping cancels any gains.
Quality, tech debt, and incentives
- Some put strong emphasis on consistency, architecture, and long‑term maintainability; others argue, pragmatically, that not every imperfection is “tech debt.”
- Documentation and institutional knowledge are highlighted as chronic weak points; LLMs can help surface existing utilities in large, poorly documented codebases, but also learn bad patterns from them.
- Several tie the issue to incentives: enterprises reward shipping and volume over craftsmanship, so many developers and non‑technical users will happily accept “works on the surface” AI output.
Alternative attitudes toward AI coding
- Some experienced developers claim AI‑written code is often as good or better than average human code, especially when it comes with explanations and is used like a smarter Stack Overflow.
- Others use metaphors: LLMs as “hunting dogs” or “English shells” that excel at local, tedious work but must be led by humans who own architecture and judgment.
- A minority openly embrace “vibe coding” as a way to offload boring complexity onto machines, even if it produces uglier code, as long as it runs.