Show HN: Atari Missile Command Game Built Using AI Gemini 2.5 Pro

Gameplay & Design Feedback

  • Early levels allow rapid cash accumulation; players can buy out nearly the entire store on the first visit, removing meaningful strategic upgrade choices.
  • Some found the gameplay degenerates into frantic clicking, lacking the timing, ammo scarcity, and chain-reaction satisfaction of classic Missile Command.
  • The developer responded by adding and tuning chain-reaction explosions (see the chain-reaction sketch after this list); players reported this improved the fun.
  • Balance issues: the “sonic wave” can trivialize later levels, and the game appears to stall after level 29.
  • Missing or changed mechanics versus the original: no obvious friendly-fire penalty, and tap-to-shoot on mobile removes the turret-rotation difficulty.
  • Visuals and UX critiques: the background was initially too fast and noisy (later fixed); Tab as a keybinding feels arbitrary.
  • Technical note: the game loop is currently tied to frame rate; commenters suggest decoupling simulation from rendering for robustness (see the fixed-timestep sketch after this list).
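
On the chain-reaction point, the core mechanic is simple to state: a warhead caught inside an active blast detonates and spawns a blast of its own, which can catch further warheads. The sketch below shows one way to write that; the types, function name, and tuning numbers are assumptions for illustration, not the game’s actual code.

```ts
// Chain-reaction sketch: a missile inside an active blast detonates and
// spawns its own blast. All names and numbers here are hypothetical.
interface Vec2 { x: number; y: number; }
interface Missile { pos: Vec2; alive: boolean; }
interface Explosion { pos: Vec2; radius: number; maxRadius: number; }

const GROWTH = 60;      // blast growth in px/second (hypothetical tuning)
const MAX_RADIUS = 40;  // px (hypothetical tuning)

function updateChainReactions(
  explosions: Explosion[],
  missiles: Missile[],
  dt: number, // seconds since last update
): void {
  // Grow every active blast.
  for (const e of explosions) {
    e.radius = Math.min(e.radius + GROWTH * dt, e.maxRadius);
  }
  // Detonate any missile caught inside a blast; the explosion it spawns
  // can catch further missiles on this or later updates — the chain.
  for (const m of missiles) {
    if (!m.alive) continue;
    for (const e of explosions) {
      const dx = m.pos.x - e.pos.x;
      const dy = m.pos.y - e.pos.y;
      if (dx * dx + dy * dy <= e.radius * e.radius) {
        m.alive = false;
        explosions.push({ pos: { ...m.pos }, radius: 0, maxRadius: MAX_RADIUS });
        break;
      }
    }
  }
}
```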
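
On the frame-rate point, the usual fix is a fixed-timestep loop: accumulate real elapsed time and advance the simulation in constant-size steps, rendering once per display frame. A minimal sketch of the classic accumulator pattern, assuming hypothetical update() and render() functions rather than the game’s own:

```ts
// Fixed-timestep loop: simulation speed no longer depends on display rate.
declare function update(dt: number): void; // advance simulation by dt seconds (hypothetical)
declare function render(): void;           // draw the current state (hypothetical)

const STEP = 1 / 60;     // fixed simulation rate: 60 updates per second
const MAX_FRAME = 0.25;  // clamp long frames (e.g. background tabs)

let accumulator = 0;
let last = performance.now();

function frame(now: number): void {
  accumulator += Math.min((now - last) / 1000, MAX_FRAME);
  last = now;

  // Simulation always advances in fixed steps, independent of refresh rate...
  while (accumulator >= STEP) {
    update(STEP);
    accumulator -= STEP;
  }

  // ...while rendering happens once per display frame.
  render();
  requestAnimationFrame(frame);
}

requestAnimationFrame(frame);
```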

AI Workflow & Gemini 2.5 Usage

  • “Initially built with Gemini 2.5 Pro” means the first HTML5 implementation was generated in Gemini canvas, then iteratively refined across multiple chats.
  • Later features (store, leaderboard, AI post-game analysis) were added over several sessions; other models and tools (Claude, Gemini via Firebase, Gemini via Cline in VS Code) were brought in when one failed or errored.
  • Gemini’s large context window was praised for handling long files and ingesting docs, though token cost is a concern; opinions differ on quality versus Claude/Cursor.

Prompts, Provenance & Reproducibility

  • Multiple commenters ask for the full prompt history, arguing that without the prompts, “AI built this” claims are opaque.
  • Some treat prompts and chat logs as a kind of requirements spec and suggest checking them into version control or linking them from commit messages or UUID tags (a hypothetical example follows this list).
  • Others note non-determinism: the same prompts may not reproduce the same code, complicating “reproducible builds” and supply-chain guarantees.
  • Debate continues over whether AI models and prompts are part of the build system or just tools, like IDEs and autocomplete.
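
As a concrete version of the version-control suggestion, one hypothetical convention (not something the developer described): check prompt transcripts into the repository and reference them from commit-message trailers, so each change links back to the conversation that produced it. For example, a transcript stored at a path like prompts/2025-04-12-chain-reactions.md could be cited like this:

```
Add chain-reaction explosions per playtest feedback

Prompt-Log: prompts/2025-04-12-chain-reactions.md
Model: gemini-2.5-pro
```

Git treats the final block of key-value lines as trailers, so tooling can later extract which transcript and model produced each commit.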

Quality, Maintenance & “Vibe Coding”

  • Concern that single-file, AI-generated projects become unmaintainable blobs that even LLMs struggle to edit as they grow.
  • Some argue such AI-first codebases may be disposable—cheaper to regenerate than maintain—raising questions about long-term reliability and user trust.
  • Others report success using LLMs for substantial real-world tooling, but emphasize that human domain knowledge and careful review remain critical.

Broader Reflections on AI & Democratization

  • Skeptics see the game as derivative, likely within training data, and not front-page-worthy.
  • Supporters argue this demonstrates software “democratization”: non-programmers can now describe and obtain working apps or games without traditional coding skills.
  • Counterarguments: democratization can mean “enshittification” if it normalizes low-quality, insecure, hard-to-maintain software.
  • Comparisons made to earlier shifts (assembly→high-level languages, IDEs, IntelliSense, digital photography) where old-guard skepticism gave way to new standards.

Security & Reliability Issues

  • The AI analysis feature sometimes returns malformed JSON that the game’s parser rejects, exposing the fragility of LLM-structured output (a defensive-parsing sketch follows this list).
  • A commenter reported prompt-injection vulnerabilities in the game’s analysis API; a security.txt file was added afterward, with an offer to discuss details privately (a minimal security.txt example follows).
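
On the malformed-JSON point, a common mitigation is to parse defensively: strip markdown fences, extract the outermost JSON object, and fail soft to a canned message. A minimal sketch; the function and field names are illustrative assumptions, not the game’s actual code.

```ts
// Defensive parsing of LLM-structured output. Models often wrap JSON in
// markdown fences or surround it with prose, so extract the {...} span
// before parsing, and return null instead of throwing on bad input.
function parseModelJson(raw: string): unknown | null {
  const unfenced = raw.replace(/`{3}(?:json)?/g, ""); // drop ```json fences
  const start = unfenced.indexOf("{");
  const end = unfenced.lastIndexOf("}");
  if (start === -1 || end <= start) return null;
  try {
    return JSON.parse(unfenced.slice(start, end + 1));
  } catch {
    return null; // caller falls back to a canned analysis message
  }
}

// Usage: validate the shape before trusting it, with a soft fallback.
declare const responseText: string; // raw model output (hypothetical)
const parsed = parseModelJson(responseText);
const summary =
  typeof parsed === "object" && parsed !== null && "summary" in parsed &&
  typeof (parsed as { summary: unknown }).summary === "string"
    ? (parsed as { summary: string }).summary
    : "Post-game analysis unavailable this round.";
```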
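
For reference, RFC 9116 defines security.txt as a plain-text file served at /.well-known/security.txt with at least Contact and Expires fields; the values below are placeholders, not the game’s actual file.

```
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security-policy
```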