Superpowers: How I'm using coding agents in October 2025
Complaints about HN's title rewriting
- Several comments criticize HN’s automatic removal of “How” from titles, arguing it often distorts meaning.
- In this case, commenters note the change inverts the intended relationship between “superpowers” and “coding agents,” making the title misleading.
Tone of the article: excitement vs. satire/voodoo
- Many readers find the writeup fascinating but say it “reads like satire,” especially the “feelings journal” and therapy‑style agents.
- Multiple commenters describe the approach as “voodoo” rather than engineering—lots of ritualistic prompt text, persuasion tricks, and emotional framing.
- Others defend it as creative experimentation that uncovers genuinely new techniques.
“Skills” concept, prompts, and subagents
- Core idea: external “skills” are markdown instructions the model can pull in as needed, often discovered by having the LLM read books or docs and extract reusable patterns.
- Some see this as just structured context / few‑shot prompting with extra ceremony; others stress that skills don’t consume context until invoked (a loader sketch follows this list) and that “agents as tools” (subagents) are an important pattern for isolating noisy subtasks.
- There’s confusion over how skills differ from tools, custom commands, or a single well‑crafted global prompt (e.g., CLAUDE.md); some think the system is over‑engineered.
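To make the mechanics concrete, here is a minimal sketch of the pattern as commenters describe it: skills live as markdown files, only a one‑line index sits in the system prompt, and a skill’s full body enters context only when the model invokes it. The `skills/` directory layout and the `skill_index`/`load_skill` names are illustrative assumptions, not the article’s actual implementation.

```python
# A minimal sketch of the "skills" pattern, assuming a local skills/
# directory of markdown files. Names and layout are hypothetical.
from pathlib import Path

SKILLS_DIR = Path("skills")  # e.g. skills/brainstorming.md, skills/tdd.md

def skill_index() -> str:
    """Cheap index for the system prompt: skill name plus first line only.
    The full skill body stays out of context until the model asks for it."""
    entries = []
    for path in sorted(SKILLS_DIR.glob("*.md")):
        first_line = (path.read_text().splitlines() or [""])[0].lstrip("# ")
        entries.append(f"- {path.stem}: {first_line}")
    return "Available skills (fetch one with load_skill(name)):\n" + "\n".join(entries)

def load_skill(name: str) -> str:
    """Exposed to the agent as a tool; only when it calls this does the
    full markdown skill enter the conversation."""
    return (SKILLS_DIR / f"{name}.md").read_text()
```

On this reading, a skill differs from a tool mainly in being plain instructions rather than executable code, which may be why several commenters found the boundary between skills, tools, and custom commands blurry.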
Demand for benchmarks and concrete value
- Repeated calls for A/B tests, metrics, and non‑trivial, end‑to‑end examples on real codebases (one possible harness is sketched after this list).
- Skeptics note that most posts are anecdotal “vibes,” with cherry‑picked success stories; they fear many layers of complexity are being added without evidence they outperform simpler prompting.
- A few links to more rigorous or at least more concrete experiments are shared, but even those are critiqued for relying on self‑reported gains.
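As a sketch of what the requested evidence could look like, here is one shape of an A/B harness: the same tasks run under a “skills” arm and a “baseline” arm, scored by whether the project’s test suite passes afterwards. The task IDs and the `run_agent()` wrapper are hypothetical placeholders, not a real agent API.

```python
# Hypothetical A/B harness comparing a skills-enabled agent configuration
# against plain prompting on the same task list.
import random

TASKS = ["fix-issue-101", "add-csv-export", "refactor-auth"]  # placeholder tasks

def run_agent(task: str, use_skills: bool) -> bool:
    """Drive the coding agent on `task`, then run the project's test suite.
    Placeholder: wire this to the real agent CLI and the test runner's
    exit code."""
    return False  # stub so the harness executes end to end

def ab_trial(trials_per_arm: int = 20) -> None:
    results = {"skills": [], "baseline": []}
    schedule = ["skills"] * trials_per_arm + ["baseline"] * trials_per_arm
    random.shuffle(schedule)  # randomize run order to avoid drift effects
    for arm in schedule:
        task = random.choice(TASKS)
        results[arm].append(run_agent(task, use_skills=(arm == "skills")))
    for arm, outcomes in results.items():
        print(f"{arm}: {sum(outcomes)}/{len(outcomes)} tasks passing")

if __name__ == "__main__":
    ab_trial()
```

Even this only measures pass rates on self‑chosen tasks, so it answers the call for metrics without fully escaping the thread’s critique of self‑reported gains.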
Experiences with coding agents: powerful but brittle
- Some commenters report large productivity boosts, especially on repetitive or boilerplate tasks, debugging, tests, and web work—likening LLMs to a gas pedal or electric bike: faster, but you must steer and still get tired.
- Others find agents create messy, duplicated, or context‑ignorant code, especially on larger or more idiosyncratic codebases; for them, fixing AI output is slower than writing code directly.
- Many emphasize that effective use feels like managing an intern or junior team: you must specify work precisely, maintain design docs/specs, and review every line.
Meta‑skill and complexity concerns
- Some feel the “agentic coding” ecosystem (skills, subagents, journals, persuasion prompts) is racing ahead of mainstream developers, turning programming into managing opaque meta‑systems.
- Several argue that a modest setup—a single, carefully written project prompt, short tasks, and tight human control—is enough, and that elaborate multi‑agent workflows may not justify their cognitive and token costs (an example of such a prompt follows).
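As an illustration of that modest setup, a hypothetical single project prompt (in the CLAUDE.md convention the thread mentions) might be no more than:

```markdown
# CLAUDE.md — hypothetical example of a single, carefully written project prompt

## Project
Flask API with a React frontend; tests live in tests/ and run with `pytest -q`.

## Rules
- Work on one small task at a time; stop and ask before any large refactor.
- Match the existing code style; never duplicate a helper that already exists.
- Run the test suite after every change and report failures verbatim.
```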