How I build software quickly

Rough drafts, prototypes, and management

  • Many commenters agree that starting from a rough, end‑to‑end draft helps discover requirements and “unknown unknowns” in the problem space.
  • Several warn that “draft” code often gets prematurely promoted to production by managers who see a demo and declare it “done.”
  • Suggested mitigations: clearly label work as mockups, deliberately leave visible rough edges, or hold off on demoing artifacts that aren’t ready.

AI, bad code, and systemic dysfunction

  • Consultants report repeatedly finding long-lived enterprise systems (banks, hospitals, factories) held together with hacks and TODOs, with no tests or version control.
  • AI is seen as accelerating this: more code, faster, with less conceptual integrity. One example: an LLM‑generated codebase for a hospital app that deleted all admin users on reboot.
  • Some note this is not new; AI just speeds up an existing pattern that decision-makers already don’t understand or resource properly.

Speed now vs long‑term maintainability

  • Several emphasize that initial velocity must be balanced with future speed: tests, docs, decision logs, observability, and good data models pay off over time.
  • Solo devs describe “lab notebooks” and decision logs as crucial for their future selves.
  • There’s broad agreement that APIs, data models, and overall architecture are the hardest things to “iterate out of” later.

Data modeling, architecture, and scale

  • Starting from the database schema (or core data model) is praised as making everything else simpler; getting it wrong leads to painful migrations and operational risk (a sketch follows this list).
  • Small teams can move fast with looser code; in large organizations, architectural mistakes and refactors become exponentially more expensive.
  • Microservices are suggested as a way to keep teams small, but also criticized for adding tech‑stack sprawl and complexity.
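
As a rough illustration of the “start from the data model” advice, here is a minimal sketch in Django-style Python; the app, models, and field choices are hypothetical, not taken from the thread:

    from django.db import models

    class Customer(models.Model):
        name = models.CharField(max_length=200)
        email = models.EmailField(unique=True)  # uniqueness enforced in the schema, not in app code

    class Invoice(models.Model):
        customer = models.ForeignKey(Customer, on_delete=models.PROTECT)
        issued_at = models.DateTimeField(auto_now_add=True)
        # Integer cents avoids float rounding; choices like this are painful to change later.
        total_cents = models.PositiveIntegerField()

        class Meta:
            indexes = [models.Index(fields=["customer", "issued_at"])]

The point is that keys, constraints, and units get decided up front, because those are the choices that later require migrations rather than refactors.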

Testing philosophy and fast feedback

  • One detailed thread advocates heavy, concurrent black‑box integration tests (API + DB + dependencies) that run in seconds, using randomized data and ephemeral DBs (a sketch follows this list).
  • Others caution against over‑optimizing for speed at the expense of realism and test maintainability; mocks and stubs are seen as both useful and fragile.
  • There’s disagreement over how worthwhile unit, integration, and “in‑between” tests each are.
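
For the black‑box style described above, here is a hedged sketch assuming a Flask-style application factory; create_app, its database_url argument, and the /users endpoints are illustrative, not from the thread:

    import uuid

    import pytest

    from myapp import create_app  # hypothetical application factory

    @pytest.fixture
    def client(tmp_path):
        # Fresh throwaway SQLite file per test; a real setup might start a disposable Postgres instead.
        app = create_app(database_url=f"sqlite:///{tmp_path}/test.db")
        return app.test_client()

    def test_create_and_fetch_user(client):
        # Randomized data keeps tests independent and safe to run concurrently.
        email = f"{uuid.uuid4().hex}@example.test"
        created = client.post("/users", json={"email": email})
        assert created.status_code == 201

        fetched = client.get(f"/users/{created.get_json()['id']}")
        assert fetched.status_code == 200
        assert fetched.get_json()["email"] == email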

“Boring tech”, frameworks, and stack choices

  • A large subthread argues that mastering one “boring” stack (e.g., Django/Postgres) is a major speed advantage; frameworks like Django/Rails/Laravel are praised for rapid CRUD.
  • Debate over SQLite vs Postgres: SQLite is attractive for simplicity and local/CI use, but many warn about concurrency limits and subtle production issues.
  • Others counter that overuse of big frameworks or Kubernetes/Redis for simple apps adds unnecessary complexity; some prefer composable libraries (e.g., Go) despite more boilerplate.
  • Frontend: many claim most apps don’t need SPAs; server‑rendered pages with small sprinkles of interactivity (HTMX/Alpine, LiveView‑style) can be faster to build and maintain (sketched below).
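
For the “sprinkles over SPA” approach, a minimal sketch of a pair of Django views; the Task model, URL wiring, and templates are assumptions for illustration:

    from django.shortcuts import render

    from .models import Task  # hypothetical model

    def task_list(request):
        # Normal navigation gets a full server-rendered page.
        return render(request, "tasks/list.html", {"tasks": Task.objects.all()})

    def add_task(request):
        # An HTMX-enhanced form posts here; the returned HTML fragment is swapped
        # into the page by HTMX, so no client-side state management is needed.
        task = Task.objects.create(title=request.POST["title"])
        return render(request, "tasks/_row.html", {"task": task})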

Clean code under tight deadlines

  • For game jams and hacky projects, the article suggests deprioritizing code cleanliness; several commenters strongly disagree, saying good habits keep them faster even under 24‑hour constraints.
  • Viewpoints differ on whether you should “do it well” on the first pass or embrace messy exploration and then rigorously refactor; both camps stress discipline in knowing when to clean up.

Team norms, incentives, and quality

  • A recurring theme is that “good enough” is rarely explicit: ex‑big‑tech engineers and startup veterans often clash over acceptable bug levels and process rigor.
  • Suggestions include team charters to define expectations around tests, refactoring, and quality.
  • Some argue the real enemy of quality is misaligned incentives: customers don’t pay for internal code quality, layoffs and rush culture punish experimentation, and vendor/AI lock‑in may make things worse.