Using coding assistance tools to revive projects you were never going to finish

Cloud vs Local Models and Cost

  • Debate over paying for cloud AI subscriptions vs investing in local hardware.
  • Some see $20/month coding assistants as trivial and multipurpose; others find $200/month tiers prohibitive.
  • Local setups (Mac Studio, high‑end AMD/RTX PCs) praised for control and avoiding “nerfed” cloud services, but criticized as costly, noisy, and slower than top hosted models.
  • Open models via local runtimes or hosted gateways (e.g., OpenRouter) seen by some as good-enough, by others as “glorified autocomplete” unless run on very strong hardware.
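A minimal sketch of what "good enough" open-model access through a hosted gateway can look like, using OpenRouter's OpenAI-compatible chat-completions endpoint. The model name and prompt are illustrative assumptions, not recommendations from the discussion.

```python
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def complete(payload: dict, api_key: str) -> str:
    """POST the payload to OpenRouter and return the first choice's text."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires OPENROUTER_API_KEY in the environment; model name is an example.
    payload = build_payload(
        "qwen/qwen-2.5-coder-32b-instruct",
        "Explain what this regex matches: ^\\d{4}-\\d{2}$",
    )
    print(complete(payload, os.environ["OPENROUTER_API_KEY"]))
```

The same request shape works against a local runtime that exposes an OpenAI-compatible endpoint, which is part of why gateways and local servers are treated as interchangeable in the debate.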

Use Cases: Reviving and Building Personal Projects

  • Many report resurrecting old or abandoned projects: games, note apps, editors, networking setups, home servers, old mods, and internal tools.
  • Coding assistants help re-understand old code, modernize dependencies, plan features, and generate missing UI, tests, and glue code.
  • Game dev examples include Godot, Bevy, and custom engines, with LLMs assisting in procedural content, simulation logic, and tools rather than full game creation.
  • Numerous “personal tools” built that would be uneconomical to buy or sell: niche GUI utilities, search libraries, admin panels, save editors, automations.

Perceived Benefits

  • Major gains cited in:
    • Getting past the "half-finished wall" and the re-entry overhead of returning to an old codebase.
    • Offloading boilerplate, refactors, and plumbing to focus on design and domain logic.
    • Exploring alternative architectures cheaply via mass refactoring.
    • Making non-coding hobbies more enjoyable by quickly building supporting software.

Limitations and Frictions

  • Local models often too slow/weak for complex agentic workflows; hardware requirements high.
  • Models struggle with visual/scene assets, engine-specific formats, and staying consistent across sessions.
  • Productive use typically requires careful setup: CI, isolation, reproducible commands, headless modes, and explicit testing hooks.
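One way to provide the "explicit testing hooks" mentioned above is a single reproducible check script the assistant can run headlessly after every change. This is a minimal sketch assuming a Python project with pytest; the specific commands are illustrative assumptions.

```python
import subprocess
import sys

# Commands an assistant is allowed to run after each edit. Each should be
# deterministic and headless; the specific tools listed are assumptions.
CHECKS = [
    ["python", "-m", "pytest", "-q", "--maxfail=1"],
    ["python", "-m", "compileall", "-q", "src"],
]

def run_checks(commands=CHECKS) -> bool:
    """Run each check in order; stop at the first failure and report it."""
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"FAIL: {' '.join(cmd)}\n{result.stdout}{result.stderr}")
            return False
    print("All checks passed.")
    return True

if __name__ == "__main__":
    sys.exit(0 if run_checks() else 1)
```

A single pass/fail exit code gives the model an unambiguous signal, which matters more for agentic workflows than for interactive use.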

Skepticism, Quality, and “Slop”

  • Critics see a surge of shallow, low-quality “AI slop,” little learning, and reduced personal pride or attachment.
  • Some argue that the effort coding once required filtered out bad ideas; LLMs now make it too easy to pursue every whim.
  • Concerns about astroturfing/marketing: posts that read like corporate propaganda draw suspicion.
  • Others counter that for personal tools, maintainability and elegance matter less than solving one’s own problem quickly.

Skills, Learning, and Future Outlook

  • Disagreement on “deskilling”: some fear losing problem-solving practice; others argue skills can be (re)learned and that AI simply shifts effort toward architecture, scoping, and integration.
  • Several expect a future where bespoke “self-source” apps are routinely generated to spec, flooding the world with ultra-niche software and lowering the economic value of such projects.