I'm helping my dog vibe code games

Overall tone and reception

  • Many readers found the project delightful, whimsical, and “peak HN”: a playful, well-executed hack with good writing and a cute dog.
  • Others were annoyed or baffled it hit the front page, seeing it as gimmicky “dog as mascot” on top of yet another LLM demo.
  • Several treated it as satire or social commentary about AI hype, “vibe coding,” and what counts as “creating” software today.

Is the dog doing anything? Randomness vs intent

  • A major thread: the dog is essentially an entropy source. People noted you could substitute /dev/random, a roulette wheel, plants, weather, etc.
  • Some argued the input “doesn’t matter at all”; all meaningful intent lives in the long system prompt and scaffolding.
  • Others said the randomness does matter in the same way random seeds, clouds, or stars invite interpretation—though still not “authored” by the dog.
  • A subset called the blog title clickbait because the dog isn’t actually expressing preferences or giving feedback on the game.
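The entropy-source point is easy to make concrete: the dog's role reduces to picking one option from a list, which any randomness source can do. A minimal sketch (the function name and option labels are invented for illustration):

```python
import os

def pick_option(options):
    """Stand-in for the dog: pick one design option using raw OS entropy."""
    # A single byte of entropy is enough for a short list of choices
    # (modulo bias is negligible at this scale).
    return options[os.urandom(1)[0] % len(options)]

choice = pick_option(["platformer", "shooter", "puzzle"])
```

Swap `os.urandom` for a roulette wheel, a weather API, or the dog, and everything downstream is unchanged, which is exactly the commenters' point.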

Scaffolding, feedback loops, and “vibe coding”

  • Many highlighted the key insight: quality came not from clever prompts but from tooling that let the model lint, inspect scenes, run tests, and playtest.
  • This fed the claim that “engineering is in the scaffolding, not the prompting”; the LLM is more an execution engine inside a larger system.
  • Critics countered that (a) the long system prompt still encodes heavy-handed human intent, and (b) the resulting games are low-tier “itch.io shovelware”, so intent and design skill still matter.
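The “engineering is in the scaffolding” claim describes a generate–lint–retry loop. A toy sketch under stated assumptions: `agent_step` is a trivial stand-in for the LLM call, and `lint` uses Python's own compiler as the checker; the project described in the post wired in engine linters, scene inspectors, and playtests instead.

```python
def lint(source: str) -> list[str]:
    """Stand-in checker: compile-check the snippet, return error messages."""
    try:
        compile(source, "<agent>", "exec")
        return []
    except SyntaxError as e:
        return [f"line {e.lineno}: {e.msg}"]

def agent_step(source: str, errors: list[str]) -> str:
    """Placeholder for the LLM call; here it just closes the missing paren."""
    return source + ")"

# The feedback loop: generate -> lint -> feed errors back until clean.
code = "print('hello'"   # first-draft "model output" with a syntax error
for _ in range(3):
    errors = lint(code)
    if not errors:
        break
    code = agent_step(code, errors)
```

The design point the commenters made is that the loop, not the prompt, is where quality comes from: the model gets mechanical, checkable feedback each iteration instead of one-shot generation.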

AI as slot machine, output quality, and artistic value

  • Multiple comments compared LLM use to gambling: random seeds, superstition around “magic prompts,” and a multi-run sampling UX that mirrors casino design.
  • Some see vibe-coded indie games as “slop factories” that devalue craft; they argue AI should expand solo dev scope, not mass-produce 6/10 games.
  • Others embrace throwaway, experimental outputs as valid art or fun tinkering, especially when clearly framed as a joke or experiment.

Jobs, economics, and anxiety about AI

  • Long subthreads debated whether projects like this herald the death of software development as a trade or just another tech hype bubble.
  • One side: if random noise + scaffolding yields working software, then “prompt skill” is flimsy job security; much white-collar work could be next.
  • The other side: tech has always displaced trades; society overall gains if “billions can spin up software on demand,” even if some careers die.
  • Opponents stressed real harms: unemployment, loss of healthcare, concentration of power/wealth, environmental cost, and lack of social planning.

Technical discussion: engines and LLM ergonomics

  • Several appreciated the detailed notes on engines: Godot worked best because its .tscn scenes are human- and LLM-editable text; Unity’s YAML and Bevy’s ecosystem were harder for the agent.
  • People discussed issues like non-unique IDs in Godot files and how linters and explicit agent instructions can “bend” LLM weaknesses into solvable engineering problems.
  • Some predicted tools and formats will increasingly be designed to be “LLM-legible.”
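The non-unique-ID issue lends itself to exactly the kind of cheap lint the thread describes, since .tscn scenes are plain text with `ext_resource`/`sub_resource` headers carrying `id="..."` fields. A rough sketch of such a check (illustrative, not a full parser; the sample scene content is made up):

```python
import re
from collections import Counter

def find_duplicate_ids(tscn_text: str) -> list[str]:
    """Report resource ids declared more than once in a .tscn file."""
    # Match id="..." inside [ext_resource ...] / [sub_resource ...] headers.
    ids = re.findall(
        r'\[(?:ext_resource|sub_resource)\b[^\]]*\bid="([^"]+)"', tscn_text
    )
    return [i for i, n in Counter(ids).items() if n > 1]

scene = '''\
[ext_resource type="Script" path="res://player.gd" id="1_a"]
[ext_resource type="Texture2D" path="res://dog.png" id="1_a"]
[sub_resource type="RectangleShape2D" id="2_b"]
'''
dupes = find_duplicate_ids(scene)  # -> ["1_a"]
```

A check like this turns a silent LLM failure mode (reusing an id) into an explicit error the agent can be told to fix, which is the “bend weaknesses into solvable engineering problems” idea in miniature.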

Ideas for a “real” Dog-in-the-Loop (DiL)

  • Multiple commenters wanted the dog truly in the feedback loop:
    • Buttons or mats mapped to choices, or bark detection and eye-tracking on a screen.
    • Tail-wag or attention detection as reward signals for game variants.
    • Games explicitly tuned to what the dog enjoys (chasing, barking at on-screen animals, etc.).
  • This was framed as both a more honest experiment and a way to explore “alignment” and intent with non-human users.
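In code terms, these proposals amount to mapping physical inputs to choices and engagement time to a reward signal. A toy sketch, with all names (mats, variants, the attention metric) invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class DogSession:
    """Toy DiL loop: button presses pick variants, attention time is reward."""
    buttons: dict                       # physical input id -> game-variant label
    rewards: dict = field(default_factory=dict)

    def press(self, button_id: str) -> str:
        # The dog's choice selects which game variant to run next.
        return self.buttons[button_id]

    def record_attention(self, variant: str, seconds: float) -> None:
        # Longer engagement = stronger signal the dog likes this variant.
        self.rewards[variant] = self.rewards.get(variant, 0.0) + seconds

    def favorite(self) -> str:
        return max(self.rewards, key=self.rewards.get)

session = DogSession(buttons={"red_mat": "squirrel_chase", "blue_mat": "ball_bounce"})
variant = session.press("red_mat")
session.record_attention(variant, 12.5)
session.record_attention("ball_bounce", 3.0)
```

Unlike the entropy-source setup, a loop shaped like this would actually let the dog's preferences steer which games get made, which is what commenters meant by a more honest experiment.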