Cursor IDE support hallucinates lockout policy, causes user cancellations

Incident and immediate reaction

  • Cursor users reported being logged out across devices; an email from “support” claimed this was due to a new “one device per user” policy.
  • A Cursor developer later replied on Reddit that no such policy exists, attributing the email to an AI front‑line support bot and the mass logouts to a session‑handling race condition (sketched after this list).
  • Many commenters found this incident emblematic of AI hype outpacing capability and of putting LLMs in places where precision and trust are critical.
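Cursor has not published a postmortem, so the exact bug is unknown; the TypeScript sketch below is purely hypothetical (all names invented) and only illustrates the general shape of a session race condition: two concurrent logins read the same session list, each writes back its own stale snapshot, and one device's session is silently lost, which to the user looks like a surprise logout.

```typescript
// Hypothetical illustration only: Cursor has not published details, so the
// store layout and logic here are invented to show a classic lost-update race.

type SessionStore = Map<string, string[]>; // userId -> active session ids

const store: SessionStore = new Map([["user-1", ["laptop"]]]);

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

// Non-atomic read-modify-write: the read and the write are separated by an
// async gap, so concurrent callers each work from a stale snapshot.
async function addSession(userId: string, sessionId: string): Promise<void> {
  const current = store.get(userId) ?? [];     // read
  await sleep(10);                             // simulated DB round-trip
  store.set(userId, [...current, sessionId]);  // write back stale snapshot
}

async function main(): Promise<void> {
  // Two devices log in at nearly the same time.
  await Promise.all([
    addSession("user-1", "desktop"),
    addSession("user-1", "phone"),
  ]);
  // Expected: ["laptop", "desktop", "phone"]
  // Actual:   one write clobbers the other, e.g. ["laptop", "phone"];
  // the desktop session never lands, so that device appears logged out.
  console.log(store.get("user-1"));
}

main();
```

The standard fixes are an atomic append at the storage layer or optimistic concurrency (a version check plus retry on conflict); the point is only that "logged out everywhere" is a plausible symptom of a lost update, not that this is Cursor's actual code.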

Was it really an AI hallucination?

  • Some suspect the “rogue AI” story is a convenient cover for an unpopular policy or general cost‑cutting in support.
  • Others argue it would be an odd thing to lie about because it publicly showcases the unreliability of the very tech Cursor sells.
  • The fact that multiple users reportedly received the same fabricated “policy” response led some to infer it was baked in via prompt design or fine‑tuning rather than being a one‑off random hallucination.

AI in customer support

  • Many see fully automated, no‑human‑in‑loop support as reckless, especially where money, access, or safety are involved.
  • A recurring view: LLMs are acceptable as triage/suggestion tools whose output is vetted by a human before it reaches customers; they are not ready to act as autonomous decision‑makers (see the sketch after this list).
  • Others counter that human support is often bad too; AI just mirrors existing organizational indifference to correctness.
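As a concrete version of that triage‑only stance, here is a minimal TypeScript sketch; every name in it (draftReply, KNOWN_POLICIES, route) is hypothetical, and no real support platform or LLM API is implied. The gate only lets a draft go out as a suggestion when every policy it cites actually exists; anything it cannot ground is escalated to a human.

```typescript
// Hypothetical sketch of a human-in-the-loop gate for AI-drafted support
// replies. All identifiers are invented for illustration.

interface Ticket { id: string; body: string; }
interface Draft  { ticketId: string; text: string; citedPolicyIds: string[]; }

// Policies the bot is allowed to cite; anything else is treated as invented.
const KNOWN_POLICIES = new Set(["refunds-v2", "sso-setup", "billing-faq"]);

// Stand-in for an LLM call that drafts a reply and declares which policies
// it relied on. Here it fabricates one, mirroring the incident above.
async function draftReply(ticket: Ticket): Promise<Draft> {
  return {
    ticketId: ticket.id,
    text: "You were logged out due to our one-device-per-user policy.",
    citedPolicyIds: ["one-device-per-user"], // not in KNOWN_POLICIES
  };
}

// The gate: drafts citing unverifiable policies never auto-send; they are
// routed to a human reviewer instead of reaching the customer.
function route(draft: Draft): "auto-suggest" | "human-review" {
  const ungrounded = draft.citedPolicyIds.filter((p) => !KNOWN_POLICIES.has(p));
  return ungrounded.length > 0 ? "human-review" : "auto-suggest";
}

async function main(): Promise<void> {
  const draft = await draftReply({ id: "t-42", body: "Why was I logged out?" });
  console.log(route(draft)); // "human-review": the cited policy does not exist
}

main();
```

Even the "auto-suggest" outcome here means the text is queued for a human to approve rather than sent directly; the sketch encodes the thread's position that the model proposes and a person decides.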

Hallucinations, fabrication, and “bullshit”

  • Debate over terminology: some say “hallucination” is too soft and anthropomorphic; “fabrication” or “bullshit” (in the Frankfurt sense: indifferent to truth) is more accurate.
  • Several note that LLMs will confidently invent policies, APIs, or legal precedents because they optimize for plausibility rather than truth, which is dangerous when users assume the output carries factual authority.

Cursor product, business model, and support culture

  • Mixed views on Cursor itself: many praise its fast, high‑quality code completions and claim sizable productivity gains; others describe it as buggy, resource‑hungry, and unreliable on larger or non‑TS codebases.
  • Concerns raised include poor or absent human support, a largely ignored GitHub issue tracker, reliance on a VS Code fork, and potential violations of Microsoft’s extension licensing terms.
  • Alternatives mentioned include Zed, Windsurf, Cline/Roo, Aider, Claude Code CLI, and plain VS Code + Copilot.

PR, moderation, and trust

  • The original Reddit thread was locked/removed after a developer comment; many interpret this as clumsy damage control, invoking the “Streisand effect.”
  • Some say using unlabeled AI personas in support and then quietly nuking critical threads erodes trust more than the initial bug itself.