I was banned from Claude for scaffolding a CLAUDE.md file?

What actually happened / technical setup

  • Many readers found the post confusing, especially the “disabled organization / non-disabled organization” joke and the project description.
  • Reconstructed consensus: the author used one Claude instance (“A”) to iteratively rewrite a CLAUDE.md file that guided another Claude instance (“B”) in a project scaffold. When B made mistakes, A updated CLAUDE.md to prevent repeats.
  • Some thought this was “circular prompt injection” or “Claudes talking to Claudes”; others clarified the human was still in the loop and there was no direct agent-to-agent feedback loop.
  • The author speculates the ban came from safety heuristics triggered by that setup and all‑caps instructions in the generated CLAUDE.md, but openly admits it’s a guess. No confirmation from Anthropic.
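The workflow reconstructed above can be sketched as a simple human-mediated loop. This is a hypothetical illustration, not the author's actual code: `ask_model` stands in for any LLM API call, and the function and variable names are invented for the sketch.

```python
# Hypothetical sketch of the human-in-the-loop CLAUDE.md refinement
# described above. ask_model stands in for a call to instance A
# (the "editor" Claude); names are illustrative only.

def ask_model(prompt: str) -> str:
    # Placeholder for a real LLM API call. Here we just simulate
    # a rewrite so the sketch is runnable.
    return prompt.upper()

def refine_claude_md(claude_md: str, mistake_report: str) -> str:
    """One iteration: instance A rewrites CLAUDE.md to prevent a repeat."""
    prompt = (
        "Here is the current CLAUDE.md:\n" + claude_md +
        "\n\nInstance B made this mistake:\n" + mistake_report +
        "\n\nRewrite CLAUDE.md so this mistake cannot recur."
    )
    return ask_model(prompt)

# The human observes B's mistakes and decides when to invoke A;
# there is no direct A<->B channel, so this is not a closed
# agent-to-agent loop.
claude_md = "Use tabs, not spaces."
for mistake in ["B used spaces for indentation"]:
    claude_md = refine_claude_md(claude_md, mistake)
```

The point the sketch makes is structural: each update to CLAUDE.md passes through the human, who curates the mistake report before invoking A.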

Automated bans, black-box moderation, and risk

  • Multiple commenters report being banned by Anthropic (and other AI providers) after very minimal or seemingly benign usage (first prompt, VPN use, using Gemini CLI + Claude, sci‑fi recommendations, etc.), often with no clear reason and no effective appeal.
  • Some suspect detection heuristics around prompt injection, self-modifying instruction loops, “knowledge distillation” (output that echoes system-prompt language), or tight feedback loops where Claude output is systematically fed back into Claude. Others think the ban may have been unrelated to the author’s last action.
  • There is strong frustration with opaque, automated “risk departments” that ban first and never explain, with comparisons to Stripe/Google account nukes.

Customer support and product behavior

  • Many complain Anthropic’s support is effectively non-existent: Fin bot gatekeeping, appeals ignored or extremely slow, GitHub issues auto-closed, harsh Discord moderation.
  • A few report good experiences or say enterprise customers do get human attention; others argue small accounts are simply not worth the support cost.
  • Several users report recent instability in Claude desktop/web/Code (hangs, content filter false positives, quota spikes, conversation stalls), reinforcing distrust.

Dependence on proprietary LLMs & alternatives

  • Thread-wide concern: if frontier LLMs become required tools for knowledge work, opaque bans could effectively eject people from the workforce, much as losing a Google or Microsoft account can mean losing email, photos, or even a phone OS.
  • Many advocate model-agnostic tooling and local/open-weight models (Qwen, GLM 4.7, Mistral, etc.), despite acknowledging they’re still behind Opus/Sonnet in capability, especially for complex coding/agentic tasks.
  • Tools like OpenCode, OpenHands, aider, and CLI setups with cloud OSS models are discussed as safer, portable alternatives.

Regulation, capitalism, and speech norms

  • Strong calls for laws requiring platforms to: state precise ban reasons, retain evidence, and offer real appeals; EU GDPR/DSA are mentioned but seen as limited in practice.
  • Debate over whether “late capitalism” is to blame versus lack of regulation/enforcement.
  • Some see safety systems (e.g., bans for swearing or “unsafe” prompts) as early steps toward broader behavior control; others focus more on corporate incentives and cost of support.