The peril of laziness lost

Laziness, Abstraction, and Code Quality

  • Many agree the “virtue of laziness” is useful: friction forces you to understand the problem and avoid unnecessary work and abstractions.
  • Several advocate “WET (Write Everything Twice) / rule of three”: tolerate a duplicate or two, then abstract only once a clear pattern emerges; a wrong abstraction is costlier than duplication.
  • Others warn that abstractions don’t automatically simplify systems; leaky or premature abstractions can be worse than repetition.
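The rule-of-three guidance above can be illustrated with a minimal sketch (the formatting helpers are hypothetical, not from the thread):

```python
# Two near-duplicates: under the rule of three, leave them alone for now.
def format_user(name: str) -> str:
    return name.strip().title()

def format_city(city: str) -> str:
    return city.strip().title()

# Only when a third copy appears has a real pattern emerged; extracting
# earlier risks baking in the wrong abstraction.
def format_label(text: str) -> str:
    """Shared helper extracted after three call sites converged."""
    return text.strip().title()

print(format_label("  new york  "))  # -> New York
```

The point is sequencing, not the helper itself: the duplication is the cheap experiment that reveals whether an abstraction is actually warranted.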

LLMs, “Vibe Coding,” and Slop

  • A recurring theme: LLMs aren’t “lazy” and will eagerly generate large, overbuilt solutions (e.g., custom parsers, SPAs, platforms) where a simple script or existing library would do.
  • People describe PRs with thousands of lines of redundant or irrelevant code, produced quickly without the usual human “this is dumb” filter.
  • Some see LLMs as poor at forming and maintaining a coherent system-level mental model, so they accumulate “sloppy” local fixes and dead-end abstractions.

Metrics: LOC, Tokens, and Productivity

  • Bragging about huge line counts is widely ridiculed; many point to historical stories where deleting code was the real productivity gain (e.g., Bill Atkinson reporting “−2000” lines on Apple’s LOC tracking form).
  • LOC is viewed as a terrible metric, now made even worse when code is AI-generated.
  • Token usage is seen by some as a slightly better heuristic (amount of “thinking” offloaded), but others note it can be gamed just like LOC.
  • One team reports roughly 50%+ productivity gains with LLMs when measured against pre-LLM sprint estimates, but stresses careful validation.

Testing, Verification, and AI Limitations

  • LLM-generated tests are often shallow, redundant, or misdirected; sheer quantity of tests does not equal rigor.
  • In scientific domains, LLMs tend to favor common but not necessarily relevant validation cases and often neglect mathematical verification.
  • Some suggest property-based testing, mutation testing, and adversarial “code vs test” agents, but emphasize that humans still need to design or at least vet the critical tests.
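The property-based testing idea above can be sketched without any framework: instead of enumerating examples (the style LLM-generated tests tend to produce shallowly), a human designs an invariant and checks it against many random inputs. The run-length encoder below is a hypothetical stand-in for the system under test:

```python
import random

def run_length_encode(s: str) -> list[tuple[str, int]]:
    """Toy function under test (hypothetical example)."""
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def run_length_decode(pairs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in pairs)

# Property: decoding an encoding round-trips to the original string.
# One invariant like this covers a whole input space that a pile of
# hand-picked (or LLM-picked) example tests might miss.
random.seed(0)
for _ in range(1000):
    s = "".join(random.choice("ab") for _ in range(random.randrange(20)))
    assert run_length_decode(run_length_encode(s)) == s
print("round-trip property held for 1000 random inputs")
```

Libraries such as Hypothesis automate the input generation and shrink failing cases, but the critical step remains human: choosing which properties are worth asserting.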

Value, Responsibility, and Ethics

  • Multiple comments stress that the real metric is value created net of all costs: maintenance, security, legal exposure, and future cleanup.
  • There is dispute over how much disasters in software-intensive systems are due to bad code versus human corruption and governance failures.

Human Craft, Careers, and Culture

  • Several defend craft, strong abstractions, and deep domain knowledge as still non‑commoditized, even in an AI-heavy world.
  • Others see a risk of “slop drowning” thoughtful design and worry that fast, AI-assisted output will erode standards and reward the wrong behaviors.