Ask HN: Cursor or Windsurf?
Overall sentiment on Cursor vs Windsurf
- Many use Cursor or Windsurf daily and find both “good enough”; preference often comes down to UX details.
- Cursor is often praised for:
  - Exceptional autocomplete / “next edit prediction” that feels like it reads intent during refactors.
  - Reasonable pricing with effectively “unlimited but slower” requests after a quota.
- Windsurf gets credit for:
  - Stronger project‑level context and background “flows” that can run in parallel on bugs/features.
  - Better repo awareness for some users, though others complain it reads only 50–200‑line snippets and fails on large files.
- Several people who tried both say Cursor “just works better” day‑to‑day; a smaller group reports the opposite, or that Windsurf solves problems Cursor repeatedly fails on.
Zed and other editor choices
- Zed has a vocal fanbase: fast, non‑janky, good Vim bindings, tight AI integration (“agentic editing,” edit prediction, background flows).
- Critiques of Zed: completions weaker than Cursor’s, missing debugging support and some language workflows, and Linux/driver issues for a few users.
- Some stick to VS Code or JetBrains plus Copilot, Junie, or plugins (Cline, Roo, Kilo, Windsurf Cascade) rather than switch editors.
- A sizable minority ignores IDE forks entirely, using Neovim/Emacs plus terminal tools (Aider, Claude Code, custom scripts).
Agentic modes vs autocomplete / chat
- Big split:
  - Fans of agentic coding like letting tools iterate on tests, compile errors, and multi‑file changes in the background (a rough sketch of that loop follows this list).
  - Skeptics dismiss agent output as “code vomit” and find the agents resource‑heavy and hard to control; they prefer targeted chat plus manual edits.
- Some report better reliability and control from CLI tools (Claude Code, Aider, Cline, Codex+MCP‑style tools) than from IDE‑embedded agents.
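For readers who haven’t tried agentic coding, a minimal conceptual sketch of the “iterate on tests” loop is shown below. It is not how any specific tool (Claude Code, Aider, Cline) is implemented; `ask_model` and the retry limit are hypothetical placeholders for whatever backend and budget you actually use.

```python
# Conceptual sketch of an agentic fix loop: run tests, hand failures to a model,
# apply its patch, repeat. `ask_model` is a hypothetical stand-in, not a real API.
import subprocess

def ask_model(failure_log: str) -> str:
    """Hypothetical: return a unified diff that attempts to fix the failing tests."""
    raise NotImplementedError("wire this up to your LLM of choice")

def run_tests() -> subprocess.CompletedProcess:
    # Assumes a pytest-based project; substitute your own test command.
    return subprocess.run(["pytest", "-x", "-q"], capture_output=True, text=True)

for attempt in range(5):  # bounded so the agent cannot spin (and spend) forever
    result = run_tests()
    if result.returncode == 0:
        print(f"tests green after {attempt} fix attempt(s)")
        break
    patch = ask_model(result.stdout + result.stderr)
    subprocess.run(["git", "apply"], input=patch, text=True, check=True)
else:
    print("giving up; review the last failure manually")
```

The skeptics’ complaint maps onto this loop directly: each iteration can touch many files, and nothing forces the resulting patches to stay small or comprehensible.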
Cost, pricing, and local models
- Flat plans (Cursor, Claude Code, Copilot) feel psychologically safer than pure pay‑per‑token, but can be expensive at high usage.
- BYO‑API setups (Aider, Cline, Brokk) are praised for cost transparency; users report wildly different real‑world costs, from a few cents to roughly $10/hour (see the back‑of‑envelope arithmetic after this list).
- Local models via Ollama/LM Studio/the Void editor are used for autocomplete and smaller tasks; they are generally still weaker than top cloud models but valued for privacy and predictable cost (a minimal local‑call sketch follows).
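In practice, “local model for autocomplete” amounts to a short HTTP call to a locally hosted model. The sketch below assumes Ollama is running on its default port (11434) with a code model already pulled; the model name is just an example, not a recommendation from the thread.

```python
# Minimal local completion call against Ollama's default /api/generate endpoint.
# Assumes `ollama serve` is running and the named model has been pulled locally.
import json
import urllib.request

def local_complete(prompt: str, model: str = "qwen2.5-coder") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(local_complete("Write a Python function that reverses a linked list."))
```

Nothing leaves the machine, which is the privacy argument; the trade‑off is that small local models usually complete less accurately than the hosted frontier models.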
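The “cents to $10/hour” spread in BYO‑API reports is just per‑token arithmetic. Every price and usage figure below is an illustrative assumption, not a number quoted in the thread.

```python
# Back-of-envelope BYO-API cost estimate; all prices and usage figures are assumptions.
price_in, price_out = 3.00, 15.00                # assumed $ per million input/output tokens
input_tokens, output_tokens = 500_000, 100_000   # assumed tokens burned in one heavy hour

cost = input_tokens / 1e6 * price_in + output_tokens / 1e6 * price_out
print(f"~${cost:.2f} for that hour")             # ~$3.00 with these assumptions
# Light use of a cheap model lands at cents; heavy agent loops on premium models can pass $10.
```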
Workflow, quality, and long‑term concerns
- Several worry that heavy agent use produces large, poorly understood, hard‑to‑maintain codebases.
- Others report huge personal productivity gains, especially non‑experts or solo devs, and see AI tools as unavoidable to stay competitive.
- Many now disable always‑on autocomplete as distracting, keeping AI as:
  - On‑demand chat/rubber‑ducking.
  - Boilerplate generation.
  - Parallel helper for tests, typing, or trivial refactors.
- Consensus: tools evolve so fast that any “winner” is temporary; the practical advice is to try a few and keep what fits your workflow and constraints.