Claude AI built me a React app to compare maps side by side

Overview: AI-built React/map app as case study

  • OP used Claude to generate ~95% of a React app for side‑by‑side map comparison; the final pieces had to be finished by hand due to token limits.
  • Many see this as emblematic: AI can quickly build POCs/MVPs, but final polish and edge cases still require human understanding.

Effectiveness and workflows with LLMs

  • Several commenters report “shockingly good” results using Claude (often with tools like Cursor, v0.dev, aider, VS Code agents) to build full web apps, parsers, and small services.
  • Common workflow: iterative small steps, clear constraints (e.g., “Next.js 14 app router”), frequent refactoring, git branching per feature.
  • Others struggle: models hallucinate APIs, misconfigure Docker, and produce buggy code; success seems sensitive to the stack, prompt quality, and the user's experience level.
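The "git branching per feature" part of that workflow can be sketched as a plain shell loop; branch and commit names here are hypothetical, and the throwaway repo exists only to make the snippet self-contained:

```shell
# Hypothetical per-feature loop for AI-assisted changes.
# Branch and commit names are illustrative; the temp repo is just for the demo.
set -e
cd "$(mktemp -d)" && git init -q -b main          # throwaway repo for the sketch
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "initial commit"

git checkout -q -b feature/split-view             # one branch per AI-generated feature
# ...prompt the model for one small, constrained change, review every line...
git commit -q --allow-empty -m "Add split-view map pane"   # commit only reviewed work

# A dead-end attempt is cheap to discard:
git checkout -q main
git branch -D feature/split-view                  # drop the experiment; main stays clean
```

The point of the branch-per-feature habit is that a hallucinated or dead-end approach costs one `branch -D` rather than a messy revert on `main`.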

The “last 5–10%” and debugging

  • Shared view: LLMs are strong at boilerplate and UI but weak on tricky bugs, corner cases, architecture, and production hardening.
  • Debugging strategy: treat AI as a junior dev or “compiler for natural language” — review all code, add tests, break problems into smaller chunks, sometimes discard and retry from a different angle.
  • Skeptics argue that reviewing and fixing AI code can cost more than writing it oneself, especially for experienced devs and backend/architecture‑heavy work.
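The "add tests" advice above can be as lightweight as pinning expected behavior down before accepting generated code. A minimal Python sketch (the helper and its cases are hypothetical, loosely themed on the map app):

```python
# Hypothetical AI-generated helper for a map app: normalize a longitude
# in degrees to the half-open range [-180, 180).
def wrap_longitude(lon: float) -> float:
    """Wrap any longitude into [-180.0, 180.0)."""
    return ((lon + 180.0) % 360.0) - 180.0

# Small table-driven checks pin the corner cases LLMs often miss
# (exact boundaries, values past one full wrap, negatives).
cases = [
    (0.0, 0.0),
    (180.0, -180.0),    # boundary wraps to the low end of the range
    (-180.0, -180.0),
    (540.0, -180.0),    # more than one full revolution
    (-190.0, 170.0),    # negative input wraps around
]
for given, expected in cases:
    assert wrap_longitude(given) == expected, (given, expected)
```

Writing the table first turns code review into a mechanical check: if a regenerated version of the helper fails a case, it gets discarded rather than debugged line by line.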

Learning, skills, and dependence

  • Some non‑experts and career‑switchers feel massively empowered, shipping apps they’d never have finished before.
  • Others worry newcomers will “learn to drive with GPS,” becoming dependent on AI and unable to maintain systems if tools degrade or disappear.
  • Debate over whether AI use impedes or accelerates genuine learning; experiences diverge.

Security, quality, and spam concerns

  • Fears that AI‑generated code might hide vulnerabilities and that mass low‑effort “wrappers around LLMs” will flood the web, similar to SEO or AI‑art spam.
  • Counterpoint: many industries already tolerate expensive tools and complex stacks; as long as real problems are solved, rough edges are acceptable.

Local/open models and hardware

  • Some want fully local, open‑source models on modest hardware to avoid dependence on cloud vendors.
  • Others note that mid‑range local models are already viable for this kind of coding, though the largest models still demand high‑end hardware.