Cursor Composer 2 is just Kimi K2.5 with RL
Model Provenance & Licensing
- Discussion centers on evidence that Cursor Composer 2 is built on Moonshot’s Kimi K2.5 model, accessed via an inference provider.
- Early in the thread, some commenters claim Cursor violated Kimi’s modified MIT-style license, which requires prominent attribution once a derivative product crosses certain revenue or monthly-active-user (MAU) thresholds.
- Others point out that Kimi K2.5 is “open weight,” and the license is designed to allow derivatives, though it’s non‑standard and arguably not “open source” in the OSI sense.
- Later, a statement from the Kimi side (linked in the thread) says Cursor uses Kimi K2.5 via Fireworks as part of an authorized commercial partnership, implying no license breach.
- There is meta‑discussion about whether model weights are even copyrightable and how enforceable such license clauses would be in practice.
White‑Labeling, Transparency, and Ethics
- Some users feel misled because Cursor markets the model as “its own” when it is a tuned Kimi base, comparing this to generic white‑labeling or to repackaging VS Code.
- Others argue most of the value is in continued pretraining, RL, data, and product integration, not in reinventing a base model.
- Several posts stress that RL and domain‑specific tuning can be a large share of total compute and materially change performance, so “just Kimi with RL” understates the work.
Business Model, Moat, and Competition
- Cursor is seen as an IDE/coding‑agent “harness” company: VS Code fork + model routing + agents + telemetry.
- Some think its moat is thin (open models + VS Code fork are reproducible); others argue the real moat is user data, feedback signals, and UX.
- There’s skepticism about its very high valuation when it doesn’t train full foundation models, and about in‑house benchmarks claiming to beat top closed models.
- Several predict that base models will commoditize, so integration, governance, and model‑agnosticism will matter more than owning a particular model.
User Experience & Product Quality
- Many praise Cursor’s autocomplete (“tab”) and coding agents as among the best, especially for inline work and debugging workflows.
- Others complain about bugginess, heavy resource use, degraded editor performance, opaque model routing, and higher token consumption than alternatives.
- Some report migrating to other tools (e.g., CLI‑first coding assistants) despite liking Cursor’s completions.
Broader Themes
- Debate over the ethics of “repackaging” open Chinese models and whether reactions would differ if the roles were reversed.
- Ongoing concern about ToS‑based “distillation” allegations among AI labs, though their applicability to Cursor’s use case is contested.
- Several note that building on open weights with heavy RL and product‑layer improvements is now the industry norm.