Launch HN: Continue (YC S23) – Create custom AI code assistants

Positioning vs Other Code Assistants

  • Core differentiator is custom, shareable “assistants” composed of rules, prompts, models, docs, tools (MCP), and data blocks, rather than a single monolithic copilot (see the config sketch after this list).
  • Vision is that every developer/team has an assistant tuned to their stack, practices, and constraints; hub is likened to an “NPM for assistants.”
  • Some commenters argue Copilot/Cursor already have project rules and will likely converge; others say Continue’s openness, multi-model support, and MCP focus are meaningful advantages.
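
A minimal sketch of what such a composable assistant might look like in Continue’s YAML format; the block and field names below are illustrative assumptions, not the exact spec:

```yaml
# Hypothetical assistant definition composing models, rules, docs, and MCP
# tools into one shareable unit. Field names are assumptions for illustration.
name: backend-team-assistant
version: 0.1.0

models:
  - name: GPT-4o
    provider: openai
    model: gpt-4o

rules:
  - Prefer explicit error handling over silent fallbacks.
  - Every new endpoint needs an integration test.

docs:
  - name: Internal API docs
    startUrl: https://docs.internal.example.com

mcpServers:
  - name: postgres
    command: npx
    args: ["-y", "@modelcontextprotocol/server-postgres"]
```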

Custom Assistants, Knowledge Packs, and READMEs

  • Debate over “specialized agents” vs general agents + “knowledge packs”:
    • One side: general-purpose agents with standardized domain/library descriptions (e.g., as metadata or in READMEs) are more scalable and composable.
    • Other side: explicit rules for personal/team preferences and private workflows will remain necessary and more efficient than constant tool calls.
  • Convergence in the thread around “AI-friendly READMEs” and/or lightweight, importable knowledge packs that tools can ingest.
  • Continue’s YAML assistant format aims to serve as such a portable spec; the team plans to auto-generate rules from project files (e.g., package.json), along the lines of the sketch below.
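
As an illustration of that idea, a rules block auto-derived from a project’s package.json might read roughly like this (entirely hypothetical output):

```yaml
# Hypothetical rules generated from package.json; the wording is illustrative only.
rules:
  - This project uses TypeScript 5 with strict mode; do not emit untyped JavaScript.
  - UI code uses React 18 with functional components and hooks only.
  - Tests run with vitest; place new tests beside the source file as *.test.ts.
```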

MCP, Local vs Remote, and Infrastructure

  • MCP servers currently run as local subprocesses spawned from VS Code; SSE-based remote servers are planned (see the config sketch after this list).
  • Authentication and key management are seen as the biggest unsolved issues for hosted MCP.
  • Some dislike that competing editors put MCP behind paywalls; Continue is praised for strong, open MCP support.
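
For context, a local stdio server versus a (planned) SSE remote server might be declared roughly as follows; the field names are assumptions modeled on common MCP client configs, not Continue’s exact schema:

```yaml
mcpServers:
  # Local server: spawned as a subprocess and spoken to over stdio.
  - name: filesystem
    command: npx
    args: ["-y", "@modelcontextprotocol/server-filesystem", "."]

  # Remote server over SSE: connects via HTTP instead of spawning a process.
  # Hypothetical fields; remote support is described as planned, not shipped.
  - name: internal-tools
    type: sse
    url: https://mcp.example.com/sse
```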

Value, Benchmarks, and Use Cases

  • Skeptics question whether this is more than fancy prompting and whether it’s worth paying for compared with just using top models directly.
  • Team concedes benchmarks are hard and highly context-specific; suggests users capture usage/feedback data via “data” blocks to quantify benefits themselves (see the sketch after this list).
  • Enthusiasts cite concrete use cases: language-/framework-specific helpers (Erlang/Elixir, Phoenix, Flutter, Svelte, shadcn, Firestore rules), internal workflows, and agentic edit–check–rewrite loops.
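
A rough sketch of how such a “data” block for capturing usage events might look; every field name here is an assumption, not Continue’s documented schema:

```yaml
# Hypothetical usage-data block: streams coding events to a team-owned endpoint
# so acceptance rates and edit outcomes can be measured internally.
data:
  - name: team-usage-metrics
    destination: https://metrics.example.com/continue-events
    events:
      - autocomplete
      - chatFeedback
    level: all
```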

Stability, Accessibility, and Business Model

  • Some users report past instability in the extension; founders say 1.0 focused heavily on robustness and testing.
  • Accessibility: supports text-to-speech and has worked with voice-only coders; open to feedback via Discord/GitHub.
  • OSS extension is free; monetization via team/enterprise features and an optional pooled-models add-on. Telemetry is opt-out and documented, with emphasis on letting users collect their own data.