Anthropic says OpenClaw-style Claude CLI usage is allowed again

Policy confusion around CLI and OpenClaw

  • Multiple commenters find Anthropic’s stance on Claude CLI / OpenClaw use opaque and shifting.
  • Commenters distinguish between:
    • Using OAuth credentials directly against Anthropic’s API or third‑party harnesses (said to incur “extra usage” and previously risk bans).
    • Running tools inside Claude Code CLI via claude -p (described as allowed in principle, but at times blocked by classifiers and/or billed as extra usage).
  • OpenClaw maintainers report Anthropic staff verbally OK’ing “CLI-style” usage, yet parts of OpenClaw’s system prompts were still blocked until recently, suggesting misalignment between public statements, classifiers, and ToS.
  • Several note this announcement comes via OpenClaw, not an official Anthropic channel, and consider it unreliable or “hearsay.”

Trust, reliability, and communication

  • Many say repeated changes to limits and policies, plus unofficial “clarifications” delivered via scattered tweets, have eroded trust.
  • Some accounts were disabled for “suspicious signals” with no clear recourse; others later found their accounts silently re‑enabled.
  • Users fear future rug pulls and see the environment as too unstable for critical workflows.

Pricing, limits, and economics

  • Complaints about reduced subscription limits, rising prices, and the opaque split between “plan limits” and “extra usage.”
  • Some view CLI/OpenClaw billing as effectively “API with extra steps.”
  • Debate on whether LLM providers are currently losing money on inference and how sustainable low consumer prices are.

Harnesses vs models / product strategy

  • Strong sentiment that the “harness” (Claude Code, Codex, OpenClaw, pi, etc.) is where much of the value lies.
  • An identified tension: Anthropic seems torn between maximizing Claude Code usage and being a neutral model platform, producing policy whiplash.
  • Several argue Anthropic should publish clear developer policies and simple rate limits tied to OAuth, rather than vague, case‑by‑case enforcement.

Switching behavior and model comparisons

  • Numerous users report canceling Claude plans (Pro, Max, 20x, 5x) and moving to OpenAI Codex, Chinese models (GLM, Z.ai, MiniMax, Kimi), or local models.
  • Opinions on model quality are split: some find Anthropic better at clean, readable code; others say GPT‑5.3/5.4 Codex is consistently stronger for implementation and planning.
  • Many say frontier models are now close in quality; harness UX and predictable pricing drive provider choice more than raw model differences.

Workarounds and setups

  • People describe using claude -p scripts, tmux, Telegram bots, MCP Channels, local‑cloud hybrids, and custom agents to stay within perceived rules while maximizing subscription value.
  • It remains unclear which of these patterns Anthropic fully endorses long‑term.
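The claude -p pattern described above can be sketched as a small shell wrapper. This is a hedged, hypothetical example: it assumes the Claude Code CLI is installed and already authenticated, and that -p (non-interactive print mode) accepts a prompt string with optional piped stdin; exact flag behavior may differ across CLI versions.

```shell
#!/bin/sh
# Hypothetical wrapper around Claude Code's non-interactive "print" mode.
# Assumes `claude` is on PATH and already authenticated (e.g. via OAuth login).

# Build the command line for a one-shot prompt (quoting kept simple here).
build_cmd() {
  printf 'claude -p %s' "$1"
}

prompt='"Summarize the staged git diff"'

if command -v claude >/dev/null 2>&1; then
  # Pipe repository context into a one-shot, scriptable invocation.
  git diff --staged | claude -p "Summarize the staged git diff"
else
  # CLI not installed: just show the command this script would run.
  build_cmd "$prompt"
  echo
fi
```

Wrappers like this are what commenters mean by staying inside the sanctioned CLI surface: the subscription-authenticated `claude` binary does the inference, while the surrounding script (cron, tmux, a Telegram bot) only orchestrates it.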