Fast

LLMs, agents, and (not) being fast

  • Many commenters report that LLM-based “agents” are slow and often a net productivity loss: 10–15 minutes of agent work, then hours of review and rework.
  • Inline/IDE completions and “advanced find/replace”–style prompts are seen as the only consistently fast wins (e.g., transforming all logic touching X, or mirroring a logic flow in reverse).
  • Some see 40–60% speedups for “senior-level” work, but others say they spend less time typing and more time debugging and correcting, canceling the gains.
  • Many express a strong desire for sub-second, low-latency assistants, even if less “smart”, over today’s slower but higher-benchmark models.

Traditional tools vs AI refactoring

  • Emacs/vim users argue that grep/rg + macros + language servers remain faster and more reliable for many refactors; a sketch of the mechanical case follows this list.
  • LLM proponents counter that for non-mechanical changes and code with messy semantics, agents can do large structural rewrites more quickly, though diffs still require careful review.
  • Some argue that needing an LLM to sweep through a codebase changing all logic around one concept often signals poor architecture, though legacy code and constrained environments frequently force exactly that.
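
To make the “mechanical refactor” case concrete, here is a minimal Python codemod sketch of the kind of change traditional tooling handles well. The identifiers and directory layout are hypothetical, and this assumes the rename is safe to express as a word-bounded regex; for anything semantically ambiguous, a language server’s rename is the more reliable choice.

```python
# Minimal codemod sketch: rename a function across a source tree.
# OLD, NEW, and the "src" path are hypothetical stand-ins.
import re
from pathlib import Path

OLD, NEW = "fetch_user", "load_user"
pattern = re.compile(rf"\b{re.escape(OLD)}\b")  # word-bounded, literal match

for path in Path("src").rglob("*.py"):
    text = path.read_text()
    updated, count = pattern.subn(NEW, text)
    if count:
        path.write_text(updated)
        print(f"{path}: {count} replacement(s)")
```

The counter-case raised by LLM proponents is precisely the change for which no such regex exists: rewrites that depend on the code’s meaning rather than its text.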

Thinking vs outsourcing to AI

  • Multiple comments note that devs often “work for hours to avoid one hour of thinking”; formal tools like TLA+ exist to force deeper up-front reasoning but meet resistance.
  • Several use LLMs as rubber ducks or design-doc writers, not coders: they dictate messy ideas, have the model produce structured specs, then code themselves.
  • Others worry that letting LLMs write code directly erodes developers’ own skills and understanding.

Speed as a product feature

  • Many agree: fast tests, builds, deploys, and UIs materially change behavior and productivity. Latency strongly influences how often experiments are run and how much code is shipped (see the back-of-envelope sketch after this list).
  • Examples: Godot vs Unity, Obsidian vs Notion, fast Rust-based Python tooling (uv, ruff), terminals and editors, and HN itself vs heavier web UIs.
  • Some call speed “the most fundamental feature”; others stress it’s a currency traded for safety, reliability, or richer UX.
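
A rough back-of-envelope sketch of the latency point, with entirely hypothetical numbers (none come from the thread): if each experiment costs some fixed thinking/editing time plus one wait on the feedback loop, loop latency quickly dominates the daily iteration count.

```python
# Back-of-envelope: how feedback-loop latency caps daily iteration count.
# All numbers are illustrative assumptions, not measurements.
def experiments_per_day(loop_seconds: float, think_seconds: float = 60.0,
                        hours: float = 6.0) -> float:
    # One iteration = thinking/editing time plus one wait on tests/build/deploy.
    return hours * 3600 / (think_seconds + loop_seconds)

for loop in (2, 30, 300, 900):  # 2 s, 30 s, 5 min, 15 min feedback loops
    print(f"{loop:>4}s loop -> ~{experiments_per_day(loop):.0f} iterations/day")
```

Under these assumptions a 2-second loop allows roughly 350 iterations a day while a 15-minute loop allows about 22, which is the behavioral change commenters describe.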

Tradeoffs, skepticism, and promotion

  • Several warn that prioritizing speed without robustness just doubles the rate of bad outcomes; “fast” must be coupled with “well”.
  • Users note modern software often feels slower than 1990s systems (CRTs, old POS terminals, TV channel surfing) despite far better hardware.
  • A few criticize the article as thin and partly promotional, noting that the author sells a “fast” scraping/botting tool with CAPTCHA evasion, which they find ethically questionable.