GitHub Copilot: The Agent Awakens
Copilot vs. Cursor and other coding assistants
- Many commenters say Cursor currently outperforms Copilot, especially in autocomplete: it “reads your mind” better, can suggest multi-line or next-edit changes, and has powerful quick actions (e.g., layout tweaks, prop plumbing) that turn multi‑minute tasks into seconds.
- Others report being underwhelmed by Cursor in earlier trials but impressed after recent improvements, especially for large refactors.
- Copilot is seen as catching up: bigger context windows, more models, “Next Edit suggestions,” and new agent mode make it “no longer hopeless,” but most still feel Cursor retains a small but noticeable edge.
- Some prefer Windsurf (or its maker Codeium's plugin) over both Cursor and Copilot, citing a better sense of the codebase and a smoother UX; experiences are mixed and often project‑ or style‑dependent.
Autocomplete vs. chat and agents
- A recurring theme: autocomplete and “next edit” features are the real productivity win; chat UIs and heavy agents are often distracting or produce brittle code.
- Several report that agentic tools (including Copilot Agent, Cursor Composer, Devin, Windsurf) can spiral into errors when asked to do too much, requiring human rescue and making them net‑negative beyond well‑scoped tasks.
Tooling ecosystem and editor integration
- Strong frustration that GitHub has effectively neglected the IntelliJ Copilot plugin, pushing many JetBrains users toward Cursor, Codeium/Windsurf, Augment, or CLI tools like Aider.
- Cursor being a VS Code fork is both a strength (VS Code extensions mostly work) and a liability (Microsoft‑only/DRM’d extensions like Pylance or the C# debugger don’t run). Some say a fork was necessary to implement deeper features (e.g., reading terminals).
Natural-language code search and RAG
- Users want “ask the codebase” capabilities, e.g., queries like “find all the places this variable is set without a follow‑up call.”
- Some use editor-integrated indexing (Cursor’s own indexer) while others stream whole repos into large‑context LLMs via scripts or tools like yek to answer cross‑file questions.
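One concrete “ask the codebase” query, finding every place a variable is set, can be answered deterministically for a single language without an LLM at all. A minimal sketch using Python’s standard `ast` module (the `find_assignments` helper is illustrative, not a tool from the thread; it covers plain, augmented, and annotated assignments but not, e.g., `for`-loop targets):

```python
import ast

def find_assignments(source: str, name: str) -> list[int]:
    """Return sorted line numbers where `name` is assigned in `source`."""
    lines = set()
    for node in ast.walk(ast.parse(source)):
        # Plain assignments: x = ..., (a, x) = ...
        if isinstance(node, ast.Assign):
            targets = node.targets
        # Augmented / annotated assignments: x += ..., x: int = ...
        elif isinstance(node, (ast.AugAssign, ast.AnnAssign)):
            targets = [node.target]
        else:
            continue
        for target in targets:
            for leaf in ast.walk(target):
                if isinstance(leaf, ast.Name) and leaf.id == name:
                    lines.add(leaf.lineno)
    return sorted(lines)

print(find_assignments("x = 1\ny = 2\nx += 3\n", "x"))  # → [1, 3]
```

Editor indexers effectively combine this kind of static analysis with embeddings, so fuzzier questions (“where is the timeout configured?”) can also be resolved.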
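The whole‑repo approach amounts to concatenating files with path markers into one prompt‑ready string, which is roughly what tools like yek automate. A minimal sketch (the function name and file filter are illustrative assumptions):

```python
from pathlib import Path

def pack_repo(root: str, exts: tuple[str, ...] = (".py", ".md", ".toml")) -> str:
    """Concatenate matching files under `root` into a single string,
    with a path header before each file so the model can cite locations."""
    chunks = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            text = path.read_text(encoding="utf-8", errors="replace")
            chunks.append(f"### FILE: {path.relative_to(root)}\n{text}")
    return "\n\n".join(chunks)

# The resulting string is pasted (or piped) into a large-context model
# together with a cross-file question, e.g. "where is the retry limit set?".
```

Whether this beats an index depends on repo size: past the model’s context window, chunking or retrieval becomes unavoidable.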
Reliability, safety, and outages
- Concern about agents suggesting shell commands and users blindly running them; mitigations suggested include dev containers, local history, and filters.
- Skepticism about depending on cloud agents during outages, given experiences with GitHub Actions downtime.
Impact on jobs and careers
- A long, divided debate:
  - One side sees tools like Copilot Agent and Project Padawan (assigning issues to an autonomous SWE agent that produces tested PRs) as direct moves to replace junior/boilerplate developers first and, eventually, broader white‑collar work.
  - Others argue this is another hype cycle, akin to 4GLs, low‑code, RoR, or outsourcing: tools will change how developers work but not eliminate the need for humans who understand systems, ambiguity, and business value.
- Many worry specifically about the collapse of junior roles and the training pipeline if agents take over “grunt work.” Advice given: move closer to customers and strategy, and avoid being a pure “ticket taker.”
GitHub’s strategy and messaging
- Some see a contradiction between branding Copilot as a “pair programmer” and simultaneously marketing an autonomous agent that takes issues and returns PRs; this is read as deliberate obfuscation of a replacement agenda.
- Others insist there’s no contradiction as long as humans still define goals and review code, framing agents as better tools rather than substitutes.