Anthropic tightens usage limits for Claude Code without telling users

Usage patterns and how limits are hit

  • Some users on $20–$100 plans hit limits within 1–3 hours, even running a single Claude Code instance with modest prompts; others on the same plans never come close and are surprised by the complaints.
  • Heavy users describe:
    • Long-running “agentic” workflows with sub-agents, multi-step planning docs (hundreds of lines), and implementation phases, often costing the API equivalent of $15–$45 per feature.
    • Multiple parallel Opus/Sonnet sessions (4–8+) running for hours, or even 24/7, on tasks like large refactors, migrations, debugging, test fixing, data analysis, etc.
    • Workflows where Claude repeatedly re-reads large files or entire folders, causing big token burn.
  • Others see frequent 429/529 errors (rate-limit and overload responses) or early downgrades to Sonnet, and suspect dynamic throttling by time of day, account, or region; a retry sketch for those statuses follows this list.
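
For context on those status codes: 429 is the standard HTTP rate-limit response, and 529 is the code Anthropic’s API returns when it is temporarily overloaded. Here is a minimal backoff sketch against the anthropic Python SDK, assuming an ANTHROPIC_API_KEY in the environment; the model ID and retry schedule are illustrative, not anything Claude Code itself uses.

```python
import time

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

client = anthropic.Anthropic()

def ask_with_backoff(prompt: str, retries: int = 5) -> str:
    """Retry on 429 (rate limited) and 529 (overloaded) with exponential backoff."""
    delay = 2.0
    for attempt in range(retries):
        try:
            msg = client.messages.create(
                model="claude-sonnet-4-20250514",  # illustrative model ID
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return msg.content[0].text
        except anthropic.APIStatusError as err:
            # Only retry the throttling statuses; re-raise real errors immediately.
            if err.status_code not in (429, 529) or attempt == retries - 1:
                raise
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("unreachable")
```

Backoff only smooths transient overload; it cannot recover a quota that has actually been exhausted.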

Pricing, transparency, and business model

  • Many complain that limits were tightened silently, with confusing in-product messaging and no clear indicator of remaining quota; some can only infer their usage with third-party tools like ccusage.
  • There’s broad agreement that $100–$200 “Max” subscriptions can easily yield hundreds or thousands of dollars of API-equivalent usage, implying heavy subsidization (see the back-of-the-envelope sketch after this list).
  • Competing narratives:
    • “Uber/drug dealer model,” “enshittification”: underprice to hook users, then tighten limits and raise prices.
    • Counterpoint: this is rational loss-leading in a space where compute costs will fall and models will get more efficient.
  • Some see flat-fee “unlimited” plans as inherently unsustainable and expect eventual convergence on metered pricing or stricter caps.
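
To put numbers on the subsidy claim: at Anthropic’s published API rates (Opus 4 at $15/$75 and Sonnet 4 at $3/$15 per million input/output tokens), a heavy month clears a $200 subscription several times over. A back-of-the-envelope sketch; the token volumes are illustrative assumptions, not measured data.

```python
# Back-of-the-envelope API-equivalent cost at published per-million-token rates.
OPUS_IN, OPUS_OUT = 15.00, 75.00      # USD per 1M tokens, Claude Opus 4
SONNET_IN, SONNET_OUT = 3.00, 15.00   # USD per 1M tokens, Claude Sonnet 4

def api_cost(millions_in: float, millions_out: float,
             rate_in: float, rate_out: float) -> float:
    """API-equivalent dollars for a given token volume at given rates."""
    return millions_in * rate_in + millions_out * rate_out

# Hypothetical heavy month: agentic loops that re-read large files inflate input tokens.
monthly = api_cost(millions_in=60, millions_out=8, rate_in=OPUS_IN, rate_out=OPUS_OUT)
print(f"API-equivalent: ${monthly:,.0f}/month against a $200 Max plan")  # $1,500/month
```

This is the arithmetic behind the “hundreds or thousands of dollars” estimates that tools like ccusage surface.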

Productivity gains vs. skepticism and overreliance

  • Enthusiasts say Claude Code massively boosts throughput (often 2–3×), offloads boilerplate, accelerates learning of new patterns, and enables experimentation across stacks and domains.
  • Others find it boring, brittle, or slower than simply coding and searching themselves; several report the agent looping, wasting tokens, or producing degraded code that then demands heavy human cleanup.
  • There’s a cultural clash around “vibe coding”:
    • Critics worry about skill atrophy and about projects becoming impossible to continue once limits hit.
    • Supporters argue that as long as you understand and review the code, it’s a power tool, not a crutch, and that not using LLMs at all is now self-handicapping.

Lock-in, reliability, and alternatives

  • Some users now fear dependence on a third-party tool with opaque, moving limits; unlike an owned keyboard or compiler, it can change underneath them.
  • Others rotate between providers (Claude, Gemini, Kimi, etc.) or look to local/open models (Llama 3.x, DeepSeek, R1, with tools like Goose or Ollama) to mitigate vendor risk; a minimal local-model sketch follows this list.
  • Several note growing reliability and UX issues in Claude clients (slow history, cryptic errors, poor usage visibility) and ask Anthropic to prioritize stability and clearer quota communication.
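
On the local/open fallback mentioned above, the simplest starting point is a chat call against a local Ollama server. A minimal sketch using the ollama Python client; the model tag and prompt are illustrative.

```python
import ollama  # pip install ollama; assumes a local Ollama server is running

# Illustrative fallback: route a coding question to a locally pulled model
# instead of a hosted provider with opaque, moving limits.
response = ollama.chat(
    model="llama3.1",  # any model tag already fetched via `ollama pull`
    messages=[{"role": "user", "content": "Review this diff for bugs: ..."}],
)
print(response["message"]["content"])
```

The design choice here is predictability over peak capability: a locally hosted model cannot have its limits moved from outside, which is precisely the vendor risk these users want to hedge.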