The 100-hour gap between a vibecoded prototype and a working product
Scope of “vibecoding”
- Many agree LLMs make it trivial to get from zero to a demo/MVP, especially for CRUD-style apps, simple tools, and personal one-off projects.
- Examples cited: personal kanban boards, timers, niche GUIs, small dashboards, a mapping/nav app, and teaching kids to build simple web tools.
- But participants distinguish sharply between “something that works for me” and a “sellable, robust product” with users, auth, backups, sync, etc.
The 100‑hour (or more) gap
- Strong consensus that the last 10–20% of work (robustness, UX polish, edge cases, infra, security) still takes most of the time.
- Some say "6 minutes to 6 years" depending on ambition; others argue a solid note-taking app or equivalent still takes months of full-time work.
- Many note that vibecoded prototypes often hide missing features, tech debt, and brittle assumptions that only show up under real use.
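As a hypothetical illustration of the "brittle assumptions that only show up under real use" point (the function names and data are invented for this sketch, not taken from the thread):

```python
# A "works in the demo" helper vs. the hardened version. The prototype assumes
# every record has a well-formed "email" value -- true of the demo data, false
# of real input.

def prototype_emails(records):
    # Crashes on missing keys or None values; never dedupes.
    return [r["email"].lower() for r in records]

def hardened_emails(records):
    # The unglamorous "last 20%": validate, skip bad rows, dedupe.
    seen = set()
    out = []
    for r in records:
        email = (r.get("email") or "").strip().lower()
        if "@" in email and email not in seen:
            seen.add(email)
            out.append(email)
    return out

demo = [{"email": "A@x.com"}, {"email": None}, {"name": "no email"}, {"email": "a@x.com"}]
print(hardened_emails(demo))  # ['a@x.com']
```

The prototype version passes a happy-path demo; the second record alone is enough to crash it in production.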
LLM strengths and weaknesses
- Strengths: rapid scaffolding, boilerplate, UI shells, tests for coverage, small refactors, config wrangling, simple agents for debugging/monitoring.
- Works especially well in familiar stacks (web, JS/TS, Go, etc.) and for developers who already know what “good” looks like.
- Weaknesses: domain-specific optimization (e.g., HFT engines), infra-heavy systems (telemetry, GitHub-scale services), subtle security/crypto, and long feedback loops.
- Several report LLMs hallucinating unsafe patterns, applying superficial fixes (e.g., arbitrary delays, retry spam), or gaming tests into passing.
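A sketch of the "arbitrary delays, retry spam" failure mode described above, contrasted with a more defensible pattern (both functions are illustrative, not code from the thread):

```python
import random
import time

def flaky_call_superficial(op):
    # The kind of "fix" commenters report: a magic sleep and unbounded
    # retries that mask the failure instead of handling it.
    while True:
        try:
            return op()
        except Exception:
            time.sleep(3)  # arbitrary delay, no attempt cap, error swallowed

def flaky_call_robust(op, attempts=4, base=0.1):
    # Bounded retries with exponential backoff and jitter; the final
    # exception is re-raised so callers actually see the failure.
    for i in range(attempts):
        try:
            return op()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base * (2 ** i) * random.random())
```

The difference matters under real load: the first version turns a persistent outage into an infinite loop, while the second fails loudly after a bounded number of attempts.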
Testing, security, and maintenance
- Heavy emphasis that tests are still laborious to design; LLMs can help write them but can also overfit to the implementation or game them into passing.
- Multiple anecdotes of vibecoded projects with severe security issues or sloppy handling of crypto/web3.
- Concern that future maintainers face a “maintenance gap”: AI-generated code looks clean but hides unpredictable, contextless bugs.
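One way to picture the "overfit or faked" tests mentioned above is a test that snapshots whatever the code currently returns, rather than pinning behavior a human decided on (the `slugify` function here is a made-up example):

```python
def slugify(title):
    # Implementation under test (hypothetical).
    return "-".join(title.lower().split())

def test_overfit():
    # Tautological "test" of the kind the thread warns about: it compares the
    # code's output to itself, so it stays green no matter what the code does.
    assert slugify("Hello World") == slugify("Hello World")

def test_spec():
    # A spec-driven test pins concrete expected outputs the human chose.
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Leading  spaces ") == "leading-spaces"
```

Both tests pass today, but only the second one would catch a regression.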
Workflow patterns & best practices
- Effective users treat LLMs as collaborators, not autopilots: spec-first design, careful architecture, strong linting/type systems, guardrails, and human review.
- Some advocate TDD/spec-driven development with AI filling in implementations; others mix manual design (e.g., Figma, pen-and-paper) then ask AI to implement.
- Many stress that LLM productivity gains are limited by non-coding stages: product thinking, architecture, QA, deployment, and organizational process.
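The spec-first pattern described above can be sketched as: the human writes the signature, docstring, and acceptance checks before any implementation exists, and the LLM only fills in the body (the `merge_intervals` task is a stand-in example, not one from the thread):

```python
# Spec-first sketch: contract and acceptance checks are human-owned;
# only the function body is delegated to the assistant.

def merge_intervals(intervals: list[tuple[int, int]]) -> list[tuple[int, int]]:
    """Return non-overlapping intervals covering the same points, sorted by start."""
    # --- body below is the part an assistant would be asked to fill in ---
    merged: list[tuple[int, int]] = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Acceptance checks written before the implementation; generated code
# must pass them (plus type checks and lint) before review.
assert merge_intervals([(1, 3), (2, 6), (8, 10)]) == [(1, 6), (8, 10)]
assert merge_intervals([]) == []
```

The guardrails do the heavy lifting here: with the contract and checks fixed in advance, a wrong or "creative" generated body fails fast instead of slipping into the codebase.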
Hype, economics, and future of SaaS
- Opinions split: some claim 10–20x productivity gains and predict many SaaS tools (especially "convenience layers") will come under pressure or be replaced by bespoke tools.
- Others argue that most users won’t self-host/maintain custom apps and will still pay for polished, supported products, ecosystems, and domain expertise.
- Several compare AI hype to crypto/NFT waves; some see AI as fundamentally more useful, others see similar patterns of overstatement and “get rich quick” behavior.