Slop is not necessarily the future

Good code vs. “slop” and engineering tradeoffs

  • Many argue “good code” means code that is simple, understandable, and cheap to maintain, not aesthetically perfect.
  • Repeated analogies to bridges: engineering optimizes for “good enough” under safety margins, cost, and changing requirements, not maximal durability.
  • Others push back that over‑reliance on “good enough” and short‑term cost cutting produces crumbling infrastructure (and software) and externalizes risk onto users.

Developer “camps” and the false dichotomy

  • A recurring framing splits developers into:
    1. “Product-first” – code is a means to ship features.
    2. “Craft-first” – code quality is a core value.
  • Many commenters reject this as a false dichotomy: good products usually come from people who care both about user outcomes and internal quality.
  • Some note that craft emerges from responsibility: code is a liability; maintainability, performance, and correctness matter over years, not just at launch.

AI-assisted coding: benefits and current limits

  • Proponents say LLMs already write decent “small-scale” code; with good prompts, tests, and review, they can help refactor, document, and accelerate work.
  • Critics describe AI-generated code as a structurally unsound “time bomb”: it passes tests in the short term but erodes invariants and architecture until the system becomes unfixable.
  • Several report that LLMs struggle especially with design and architecture, invariants, and complex, long-lived codebases; human effort shifts from typing code to reading and debugging it.

Economic incentives and markets

  • One side agrees with the article: maintenance costs, token costs, outages, and lost uptime will economically favor simpler, higher-quality code, even for AI.
  • Others counter that markets often reward “good enough,” lock‑in, and speed over quality (e.g., enterprise software, dominant platforms). Sloppy but entrenched systems can thrive for decades.
  • Flat‑subscription AI tools dilute any direct cost pressure toward brevity or simplicity.
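The pricing argument above can be made concrete with a toy calculation. This is a hedged sketch with entirely hypothetical numbers (token volumes, per-token price, subscription fee are made up for illustration): under metered pricing, cost scales with the volume of code an agent emits, so concise output is directly cheaper; under a flat subscription, cost is constant and the brevity incentive disappears.

```python
# Hypothetical comparison of metered vs. flat-subscription AI pricing.
# All figures below are illustrative assumptions, not real vendor prices.

def metered_cost(tokens_per_month: int, price_per_1k_tokens: float) -> float:
    """Cost under pay-per-token pricing: scales with output size."""
    return tokens_per_month / 1000 * price_per_1k_tokens

def flat_cost(subscription_fee: float) -> float:
    """Cost under a flat subscription: independent of output size."""
    return subscription_fee

# Suppose an agent emits 2M tokens/month of verbose code vs. 0.5M concise.
verbose_tokens, concise_tokens = 2_000_000, 500_000

print(metered_cost(verbose_tokens, 0.01))  # 20.0 — verbose output costs 4x more
print(metered_cost(concise_tokens, 0.01))  # 5.0
print(flat_cost(40.0))                     # 40.0 either way: no brevity incentive
```

Under the metered model, simpler output is rewarded four-to-one in this toy example; under the flat fee, the cost signal toward simplicity vanishes, which is the dilution the bullet above describes.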

Complexity, outages, and long-term risk

  • Several point to an increase in outages and brittle systems since 2022, linking it to faster shipping of code (including via AI) and rising complexity.
  • Concern that agentic tools lack explicit design representations; they just accumulate code and prompts, driving uncontrolled complexity.
  • Fear that critical infrastructure could reach a state where neither humans nor AI can safely evolve it.

Ethics, regulation, and user impact

  • Comparisons to civil engineering and medicine: real-world engineers face licensing and liability; software engineers generally do not.
  • Some see widespread AI slop as a looming security and safety nightmare, especially in domains like healthcare, aviation, and finance.