Claude is the drug, Cursor is the dealer
Cursor Usage, Value, and Economics
- Several developers report extremely heavy use of Cursor’s agentic editing, even for trivial operations, and suspect their $20/month subscription burns far more than that in upstream token costs; many doubt Cursor’s profitability under the current “all-you-can-eat” model.
- Some think Cursor’s product is strong enough to be acquired by a lab; others found it one of the least effective AI-powered IDEs (AIDEs) they have tried.
- A number of comments call Cursor’s tab-completion the killer feature, one some would pay for on its own; others find it distracting enough to cancel.
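The “burns far more in upstream tokens” claim above can be made concrete with back-of-envelope arithmetic. The sketch below uses illustrative per-token prices and usage numbers chosen for this example, not Cursor’s or any lab’s actual rates:

```python
# Back-of-envelope estimate of why heavy agentic use can exceed a
# $20/month flat fee. All figures here are assumptions for illustration.

PRICE_IN_PER_MTOK = 3.00    # assumed $ per million input tokens
PRICE_OUT_PER_MTOK = 15.00  # assumed $ per million output tokens

def monthly_cost(requests_per_day: int, in_tok: int, out_tok: int,
                 days: int = 22) -> float:
    """Estimated upstream API cost for a month of agentic editing."""
    total_in = requests_per_day * in_tok * days
    total_out = requests_per_day * out_tok * days
    return (total_in / 1e6) * PRICE_IN_PER_MTOK \
         + (total_out / 1e6) * PRICE_OUT_PER_MTOK

# e.g. 200 agent requests/day, 8k tokens of context in, 1k tokens out
print(f"${monthly_cost(200, 8_000, 1_000):.2f}/month")  # → $171.60/month
```

Under these (hypothetical) assumptions, a single heavy user costs roughly 8x the subscription price, which is the core of the profitability doubt.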
Comparisons: Cursor vs Copilot, Claude Code, Zed, JetBrains, etc.
- Experiences vary widely: some see Cursor as a slower, more expensive VS Code + Copilot + Claude setup; others say the overall workflow feels 3–5x more productive with the right model/IDE combination.
- Claude Code’s IDE/CLI integrations are praised as “already very good,” with completion being the one place Cursor still leads.
- JetBrains’ Junie and other AIDEs (Zed, Kiro, Windsurf, etc.) are seen as behind Cursor in “agentic” workflows, but chat-based editing is viewed as becoming table stakes.
“AI Wrappers” and Moats
- One camp argues tools like Cursor are thin wrappers around LLM APIs, with little defensible moat beyond prompts and UI; they expect labs to “eat the stack” and ship first-class agents like Claude Code directly.
- Others counter that specialized interfaces and workflows are nontrivial to build and maintain; labs may rationally focus on core models while letting ecosystems capture domain-specific value.
- There’s debate over what counts as a “wrapper”: just prompt + generic UI vs products that measurably improve task performance.
- Some argue Cursor and similar apps already have meaningful moat (UX, infrastructure, brand, multi-model routing) and that moats can deepen over time.
Labs vs Integrators: Who Wins?
- One commenter presents data showing lab-native assistants (Claude Code, Gemini CLI, OpenAI tools) gaining adoption faster than Cursor in GitHub repos, suggesting “drugs” may be outpacing “dealers.”
- Others point out that there are many labs and intense competition, so model providers can’t simply raise prices on downstream apps the way a monopoly could.
Future of AI: Hype, Uncertainty, and Possible Crash
- Broad agreement that the 3-year outlook is highly uncertain; many compare this to earlier platform shifts (iPhone, dot-com era) but disagree on whether AI will be more incremental or revolutionary.
- Optimists describe this as the first truly exciting tech moment in decades; pessimists foresee scams, deepfakes, propaganda, job loss, and stronger surveillance.
- Some predict a dot-com-like AI crash within ~3 years, followed by slower incremental gains and more rent-seeking (ads, sponsored responses, price hikes); others note current GPU demand is real, not idle “dark fiber.”
Ads, Influence, and Monetization
- Multiple threads anticipate LLMs will eventually integrate explicit or implicit advertising and paid product placement, especially as search-style ad revenue is threatened.
- There is speculation (but no concrete evidence cited) that training data and outputs are already being shaped by commercial incentives; others push back on these claims as unsupported.
- Some users say they’re willing to pay for ad-free AI specifically as an escape from ad-saturated search.
Skills, Education, and Calculator Analogies
- One analogy: using AI for every integral is like having a roommate shout out the answers; will you still pass the exam on your own? Concerns center on atrophy of deep skills (calculus, programming).
- Counterpoints compare LLMs to calculators, though others argue advanced math/programming differ from arithmetic: LLMs can confabulate, and success may not transfer to real understanding.
- Existing tools like Mathematica already solve entire calculus problems; some note they crammed techniques for exams and promptly forgot them anyway.
Practical Workflows and Friction
- Some developers report dramatic productivity gains; others say they waste entire days fighting LLMs to get a single test written.
- A suggested strategy for “serious” work is to use multiple models simultaneously, cross-checking and iterating among them.
- One commenter emphasizes a conservative stance: adopt tools late, after they prove durable ROI, rather than chase every AI trend.
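The multi-model cross-checking strategy above can be sketched minimally. Everything here is a hypothetical illustration: the backends are toy callables standing in for real model clients, and a simple majority vote stands in for whatever comparison the commenter had in mind:

```python
from collections import Counter

def cross_check(prompt: str, backends: dict) -> dict:
    """Query several model backends with the same prompt and flag
    disagreement; each backend is any callable str -> str."""
    answers = {name: fn(prompt) for name, fn in backends.items()}
    counts = Counter(answers.values())
    top_answer, votes = counts.most_common(1)[0]
    return {
        "answers": answers,
        # Only report a consensus if a strict majority agrees.
        "consensus": top_answer if votes > len(backends) / 2 else None,
    }

# Usage with toy stand-ins for real model clients (assumptions, not real APIs):
backends = {
    "model_a": lambda p: "42",
    "model_b": lambda p: "42",
    "model_c": lambda p: "41",
}
result = cross_check("What is 6 * 7?", backends)
print(result["consensus"])  # → 42 (two of three backends agree)
```

Disagreement (consensus of `None`) is the signal to iterate or inspect manually, which matches the “cross-checking and iterating” described above.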
Analogies and Meta-Discussion
- The “drug/dealer” metaphor itself is criticized as inaccurate; in real drug economics, logistics and distribution capture most of the value, which complicates the analogy.
- Another analogy compares labs/IDEs to Netflix and content producers, with debate over whether value lies more upstream (models/content) or downstream (distribution/experience).
- Several comments challenge the article’s blanket “no moat” framing as overly simplistic and hyperbolic, preferring more nuanced competitive analysis.