Vibecoding #2
Alternative tools & “reinventing the wheel”
- Several commenters note the project resembles existing tools (SLURM / AWS ParallelCluster, Capistrano, Fabric, Ansible, Terraform, GNU parallel).
- Some see value in a bespoke, simpler, homelab‑oriented tool; others would default to NixOS + tests or existing orchestration stacks.
- There’s mild concern about spending a day “vibecoding a square wheel,” especially for critical infra code.
Monetization vs OSS for agentic infra tools
- A similar remote‑dev / infra‑on‑demand tool is described; its author is unsure whether people would pay for it.
- SaaS for CLI tools is called “gross”; preference expressed for selling libre software or charging only for hosted services (provisioning, monitoring) while allowing self‑hosting.
Cloud cost & safety
- Strong reminders to auto‑shut down EC2/GPU instances to avoid surprise bills.
- People share simple shutdown patterns (timed shutdown, cron with a keepalive file).
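The keepalive pattern above can be sketched as a small guard run from cron; the file path, threshold, and function name below are illustrative assumptions, not details from the thread:

```python
# Hypothetical keepalive guard for a cloud/GPU instance.
# The path and idle threshold are illustrative, not from the discussion.
import os
import time

KEEPALIVE = "/tmp/keepalive"   # touch this file while you're working
MAX_AGE_S = 30 * 60            # treat the box as idle after 30 minutes

def keepalive_fresh(path: str, max_age_s: float) -> bool:
    """True if the keepalive file exists and was modified within max_age_s."""
    try:
        mtime = os.path.getmtime(path)
    except OSError:            # file missing -> stale
        return False
    return (time.time() - mtime) <= max_age_s

# A cron entry running every 5 minutes would then do something like:
#   if not keepalive_fresh(KEEPALIVE, MAX_AGE_S):
#       subprocess.run(["sudo", "shutdown", "-h", "now"])
```

The timed variant mentioned in the thread is simpler still: scheduling `shutdown -h +480` at boot gives the instance a hard eight-hour lifetime regardless of activity.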
What “vibecoding” means & how to do it
- Some argue this isn’t “pure” vibecoding but AI‑assisted coding.
- One proposed dividing line: more than 50% of the code produced by AI, versus only occasional assistance.
- Others report best results from a detailed spec/PRD plus checklists, then having agents implement phases, run tests, and review via automated loops.
AI adoption, FOMO & pricing
- Debate over whether the author is “late” to AI: some say most engineers now use AI; others say many colleagues ignore it.
- Strong sense of FOMO for some; others see it as hype with little real payoff yet.
- Experiences range from $20/month plans being ample for “assistant” use to $100–$200 tiers needed for heavy, agentic workflows.
- Confusion and discussion around per‑million‑token pricing and why some subscriptions feel far cheaper per unit.
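Much of the pricing confusion above comes down to arithmetic: agentic workflows re-send large contexts on every step, so metered input tokens dominate the bill. A toy comparison (all rates and volumes below are made-up round numbers, not any provider's real prices):

```python
# Illustrative per-million-token cost vs. a flat subscription.
# Rates and monthly volumes are hypothetical round numbers.

def api_cost(input_tokens: int, output_tokens: int,
             in_price_per_m: float, out_price_per_m: float) -> float:
    """Metered cost in dollars at per-million-token rates."""
    return (input_tokens / 1_000_000 * in_price_per_m
            + output_tokens / 1_000_000 * out_price_per_m)

# A heavy agentic month: the agent replays large context constantly,
# so input tokens vastly outnumber output tokens.
monthly_in = 200_000_000    # 200M input tokens (assumed)
monthly_out = 10_000_000    # 10M output tokens (assumed)

metered = api_cost(monthly_in, monthly_out,
                   in_price_per_m=3.00, out_price_per_m=15.00)
print(f"Metered API cost: ${metered:,.2f}/month")
print("Flat subscription: $20-$200/month")
```

At these assumed rates the metered bill is $750/month, which is why flat subscriptions can feel far cheaper per unit for heavy agentic use, while a $20 tier is ample for occasional "assistant" use.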
Positive experiences & workflows
- Multiple reports of 10x+ speedups for side projects, small tools, and hobby games, especially for “yak‑shaving” automation and throwaway scripts.
- Patterns: “snipe mode” (targeted bugfixes, small changes) works well; full‑feature generation is fun but suspect for long‑term maintenance.
- Some use agents as advanced codebase search and refactoring assistants, not as autonomous builders.
Skepticism, quality & human factors
- Complaints about bloated, hard‑to‑review AI PRs, early‑2000s enterprise patterns, and more root‑cause‑analysis (RCA) incidents tied to overlooked mistakes.
- Concern that AI accelerates “rewrite instead of fix” behavior and deepens development hell.
- Mixed reports on agents for serious work: helpful for simple CRUD/Web tasks, often weak for niche domains (e.g., complex scraping, game dev, hardware design).
- Broader critique that AI can’t fix product “enshittification,” which stems from incentives, not coding speed.
Local vs hosted models
- Some want local models for privacy but find the ecosystem confusing; others bluntly say local LLMs are still far behind Claude/Gemini/OpenAI for serious coding.
Reflections on careers & time
- Older and retired developers describe AI as finally letting them ship projects they never had time or focus to complete.
- A few feel bored or alienated by prompt‑driven workflows and question staying in the field if that becomes the norm.