Show HN: I AI-coded a tower defense game and documented the whole process

Game impressions & mechanics

  • Commenters find the tower defense game “very cool,” addictive, and visually polished; the rewind-time mechanic draws comparisons to “Edge of Tomorrow” (a sketch of how such a mechanic can work appears after this list).
  • Suggestions include adding a level editor and sharing user-generated content (UGC) on platforms like Reddit.
  • Players note the game's short length and ask for more content; one small tutorial bug is reported but cannot be reproduced.
  • Some users struggle with energy management; the key tip is to use the rewind very sparingly so that early towers remain affordable.
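
The thread doesn't describe how the rewind is implemented. A common approach to this kind of mechanic is to snapshot the whole game state every tick and restore an earlier snapshot when the player rewinds; below is a minimal TypeScript sketch under that assumption (GameState, the tick rate, and the energy cost are illustrative, not taken from the project).

    interface GameState {
      tick: number;
      energy: number;
      towers: { x: number; y: number; kind: string }[];
      enemies: { x: number; y: number; hp: number }[];
    }

    const MAX_SNAPSHOTS = 600;        // e.g. 10 seconds at 60 ticks/sec
    const REWIND_COST_PER_TICK = 0.5; // hypothetical energy cost

    const snapshots: GameState[] = [];

    // Record a deep copy of the state each tick so rewind can restore it.
    function recordTick(state: GameState): void {
      snapshots.push(structuredClone(state));
      if (snapshots.length > MAX_SNAPSHOTS) snapshots.shift();
    }

    // Rewind up to `ticks` ticks, paying energy; returns the restored state,
    // or the current state unchanged if the player can't afford the cost.
    function rewind(current: GameState, ticks: number): GameState {
      const cost = ticks * REWIND_COST_PER_TICK;
      if (snapshots.length === 0 || current.energy < cost) return current;
      const index = Math.max(0, snapshots.length - ticks);
      const restored = structuredClone(snapshots[index]);
      snapshots.length = index;                 // discard the undone future
      restored.energy = current.energy - cost;  // spent energy is not rewound
      return restored;
    }

Charging the cost against the restored state's energy, rather than rewinding energy along with everything else, is what creates the “use it sparingly” trade-off noted above.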

Use of AI in development

  • The project is seen as a strong real-world example of AI-assisted coding, especially because prompts and process are documented.
  • Several developers report similar experiences: AI is excellent at boilerplate, wiring up new frameworks/libraries, and quickly exploring unfamiliar tech.
  • Others describe AI as a “junior dev in the driver's seat”: fast, but requiring constant supervision and correctness checks.

Prompting, workflow, and “vibe coding”

  • Effective workflows emphasize clear high-level goals, breaking work into many small tasks, and giving the model architectural guidance.
  • Some treat AI as a spec/PRD generator (“vibe speccing”) or even ask it to write “scientific papers” describing intended systems before coding.
  • There is disagreement over whether “prompt engineering” is a real skill or just good communication and domain expertise by another name.

Productivity claims and skepticism

  • Enthusiasts report dramatic speedups (up to “100x”) on greenfield or exploratory work, particularly for indie games and one-off tools.
  • Skeptics argue those numbers are exaggerated; they see modest gains (e.g., 10–20 minutes saved per hour, i.e., roughly a 1.2–1.5x speedup) and note that thinking, alignment, and review dominate the time spent.
  • Debate centers on when reviewing and fixing AI-generated code becomes slower than simply writing it by hand, and on how much value experts truly gain.

Tooling and costs

  • Tools mentioned include Cursor (with Claude), Augment Code (praised for maintaining context across larger codebases but called unreliable and pricey), JetBrains with Claude integration, Claude Code, Gemini, and others.
  • The author used flat-rate subscriptions rather than per-token billing and estimates 25–30 hours of total work.

Limitations, bugs, and tricky cases

  • Multiple examples show AI struggling with subtle front-end issues (mobile text inputs, CSS layout, htmx integration) and with modern APIs, often hallucinating nonexistent calls or looping on the same failed fix.
  • Commenters stress the need to restart chats, narrow scope, and sometimes fall back to manual debugging and domain knowledge.

Project history & transparency

  • The unusually large initial commit is explained by the project's early days happening without version control; the prompts were reconstructed later from tool histories.
  • Several readers appreciate that the prompts are checked into the repo, for traceability, reproducibility, and as a learning resource.