Ask HN: How much better are AI IDEs vs. copy pasting into chat apps?
Autocomplete and in‑editor AI experience
- Several people find AI autocomplete (Cursor, Copilot, Windsurf, etc.) intrusive: it fights them on blank lines, suggests irrelevant code, and even “hallucinates” non‑existent properties/methods, breaking trust in IDE hints.
- Others report the opposite: modern Copilot + JetBrains/VS Code feels like a strong pair programmer, with high‑quality, context‑aware completions and automatic refactors, tests, and commit messages.
- Some disable aggressive autocomplete and use AI only for explicit edits, refactors, or boilerplate.
Agentic IDEs vs. chat + copy‑paste
- Pro‑IDE camp: agentic tools (Cursor, Claude Code, Cline, Roo Code, Windsurf, Copilot agent mode, Zed, JetBrains AI, etc.) can:
- Search and understand the repo, pick relevant files, generate plans.
- Edit multiple files, run tests/CLI commands, and validate before declaring the task “done” (see the loop sketch after this list).
- Greatly speed up chores, cross‑file refactors, and onboarding to unfamiliar parts of a codebase.
- Pro‑chat camp: copy‑paste into ChatGPT/Claude/Gemini feels simpler and safer:
- You explicitly control what context the model sees and what changes you accept.
- Better for “deep” thinking (architecture, algorithms, learning new tech) than for mass edits.
- Many feel less “out of touch” with their own code this way.
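To ground the pro‑IDE bullets above, here is a minimal Python sketch of the plan → edit → test loop that agent modes run. Every helper name (call_llm, gather_context, apply_edits) is a hypothetical stand‑in, not any specific tool's API.

```python
import subprocess

# Every helper here is a hypothetical stand-in; real agents (Cursor, Claude Code,
# Cline, etc.) wire these up to their own model APIs, editors, and safety checks.
def call_llm(prompt: str) -> str: ...          # send a prompt to some model, return text
def gather_context(task: str) -> str: ...      # search the repo, pick relevant files
def apply_edits(edit_text: str) -> None: ...   # parse the model's edits and write files

def agentic_edit(task: str, max_rounds: int = 5) -> bool:
    """Rough plan -> edit -> test loop behind 'agent mode' features."""
    plan = call_llm(f"Plan changes for: {task}\n\n{gather_context(task)}")
    for _ in range(max_rounds):
        apply_edits(call_llm(f"Produce concrete file edits for this plan:\n{plan}"))
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True   # tests pass, so the agent reports the task as "done"
        # Otherwise feed the failure output back and let the model revise its plan.
        plan = call_llm(f"Tests failed:\n{result.stdout}\n{result.stderr}\nRevise the plan:\n{plan}")
    return False          # out of attempts; hand control back to the human
```

The copy‑paste workflow is essentially the same loop, except the human performs the gather/apply/verify steps by hand, which is exactly the control the pro‑chat camp values.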
Context handling and large codebases
- IDE vendors claim “whole codebase” awareness, but multiple reports say:
- Tools actually select a subset of files via embedding or keyword search, and accuracy degrades as projects grow (see the retrieval sketch after this list).
- Users often must manually @‑mention or add specific files/folders to maintain accuracy.
- Tools like aider, RepoPrompt, gptel, and custom plugins focus on explicit, manual context selection and diff application.
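To make the “subset via embeddings/search” point concrete, here is a minimal retrieval sketch, assuming a hypothetical embed() function: rank files by cosine similarity to the query and keep only the top few, which is roughly why accuracy drops once the repo outgrows that subset.

```python
import math
from pathlib import Path

def embed(text: str) -> list[float]:
    """Hypothetical embedding call; real tools use their own models and chunking."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def pick_context(query: str, repo_root: str, top_k: int = 8) -> list[Path]:
    """Keep only the top_k files most similar to the query.

    This is the core reason "whole codebase" awareness degrades on big repos:
    only the best-scoring files ever reach the model's context window.
    """
    query_vec = embed(query)
    scored = [
        (cosine(embed(path.read_text(errors="ignore")), query_vec), path)
        for path in Path(repo_root).rglob("*.py")   # real tools index many file types
    ]
    return [path for _, path in sorted(scored, reverse=True)[:top_k]]
```

Tools like aider and gptel skip the automatic ranking step entirely and make the user name the files, trading convenience for predictability.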
Cost, limits, and access models
- API/CLI tools (Claude Code, some agents) can get expensive per session; commenters quote $3–5+ per task (a rough estimate of why follows this list).
- Flat‑fee or “effectively free” options (Cursor subscription, Gemini free tier, Copilot in IDEs) are attractive, especially for hobbyists.
- Many prefer to consume AI via existing chat subscriptions rather than pay per‑token APIs; some plug their own keys into agents to control spend.
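For a rough sense of where figures like “$3–5 per task” come from, here is a back‑of‑the‑envelope estimate; the token counts and per‑million‑token prices below are illustrative assumptions, not any provider's actual rates.

```python
# Back-of-the-envelope per-task API cost. Every number here is an illustrative
# assumption; check your provider's current pricing and your actual token usage.
PRICE_IN_PER_MTOK = 3.00    # assumed $ per 1M input tokens
PRICE_OUT_PER_MTOK = 15.00  # assumed $ per 1M output tokens

def task_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1e6) * PRICE_IN_PER_MTOK \
         + (output_tokens / 1e6) * PRICE_OUT_PER_MTOK

# An agent that repeatedly re-reads files can easily send ~1M input tokens and
# generate ~100k output tokens over one multi-step task:
print(round(task_cost(1_000_000, 100_000), 2))  # 4.5 -> squarely in the quoted $3-5 range
```

The input side dominates because agents resend large chunks of the repo on every round, which is why flat‑fee subscriptions feel so much cheaper for heavy use.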
Reliability, scope, and security
- Mixed quality reports: some see “another level” of productivity, others roll back disastrous multi‑file edits or bloated code.
- Strong sense that LLMs excel at greenfield code and small, well‑scoped changes, and struggle more on large legacy systems.
- Security concerns (e.g., hedge‑fund code) lead some to avoid cloud IDEs or push toward self‑hosted/local models.