Stop selling “unlimited” when you mean “until we change our minds”

Anthropic’s New Limits & User Reactions

  • Claude Max/Code users report hitting new weekly/time-based limits and feeling “rug-pulled,” especially those who used the tool heavily for coding or research.
  • Several canceled their plans after realizing they mostly needed light editing/search and didn’t want to worry about invisible caps.
  • Some describe the cancellation flow as difficult or riddled with “dark patterns” (e.g., buried Stripe buttons, no clear downgrade path, post‑click surprises like promo discounts).

Was It Ever “Unlimited”?

  • A major subthread insists Anthropic never marketed Max as unlimited, only as offering “5x/20x” the usage limits of Pro; launch docs are quoted to support this.
  • Others say that in practice Max felt virtually unlimited (e.g., many Claude Code sessions 24/7), and that users understandably internalized it that way.
  • Multiple comments highlight the general industry pattern: “unlimited” or very high/unclear limits early, then nerfs once usage and costs spike.

Pricing Models, Abuse, and Fairness

  • Some defend Anthropic: a tiny fraction of users (or resellers) allegedly ran models 24/7 or gamified usage leaderboards, forcing tighter caps.
  • Others reject blaming “bad users,” arguing Anthropic should have anticipated heavy usage and that changing terms mid‑stream is a de facto bait‑and‑switch, even if legally TOS‑compliant.
  • Many advocate transparent, metered, per‑token pricing with visible counters and rollover instead of opaque “unlimited/higher limits” subscriptions (see the sketch after this list).
  • Counterpoint: flat fees remain attractive for budgeting; heavy users can already switch to API pay‑as‑you‑go, albeit at far higher real cost.
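
For readers unfamiliar with what commenters mean by metered per‑token pricing with a visible counter and rollover, here is a minimal Python sketch. The class, field names, and numbers are hypothetical illustrations of the idea, not any real provider's API or pricing.

```python
# Hypothetical sketch of metered per-token billing with a visible counter
# and month-to-month rollover. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class TokenMeter:
    monthly_allowance: int   # tokens included per billing cycle
    rollover_cap: int        # max unused tokens carried into the next cycle
    balance: int = 0         # tokens currently available (the visible counter)

    def start_cycle(self) -> None:
        """Begin a new billing cycle: carry over unused tokens up to the cap."""
        carried = min(self.balance, self.rollover_cap)
        self.balance = carried + self.monthly_allowance

    def consume(self, tokens: int) -> bool:
        """Deduct usage; return False if the request would exceed the balance."""
        if tokens > self.balance:
            return False     # caller can surface "out of quota" instead of a silent cap
        self.balance -= tokens
        return True

meter = TokenMeter(monthly_allowance=5_000_000, rollover_cap=1_000_000)
meter.start_cycle()
meter.consume(120_000)
print(f"Remaining this cycle: {meter.balance:,} tokens")  # user-visible counter
```

The point of the sketch is the contrast commenters draw: every deduction is explicit and inspectable, rather than hidden behind a subscription tier whose effective limits can be redefined later.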

Trust, Dark Patterns, and Legality

  • Complaints include hidden VAT/fees, auto‑upgrades without clear final pricing, difficulty canceling, and vague “fair use” language that can be tightened later.
  • Some see this as standard SaaS/VC behavior: subsidize growth with unsustainable deals, then tighten once users are locked into workflows.
  • Others argue that everything is always “until we change our minds” unless contractually fixed; the core issue is poor communication and opacity, not change per se.

Alternatives, Moats, and Local Models

  • Users discuss moving to Gemini, OpenAI, or cheaper/open models (Qwen, etc.), but note quality gaps and switching costs.
  • Speculation that future “memory” and proprietary embeddings could create strong lock‑in if not portable.
  • Several call for better local/open‑weight LLMs to escape recurring pricing shocks from centralized providers.

Side Debate: AI Tools and Developer Productivity

  • Lengthy tangent on whether “developers using AI will replace those who don’t.”
  • One side: AI is like IDEs/version control—powerful cognitive augmentation; refusing it will be career‑limiting for most.
  • Other side: LLMs are unreliable, encourage dependency, and don’t help much on novel/underdocumented work; good engineers can remain competitive without them, and long‑term effects (economics, environment, skills) are unclear.