Anthropic revokes OpenAI's access to Claude
Licensing and Anti-Competition Clauses
- Anthropic’s “no competing model” term strikes some as extreme; others note that similar clauses are now standard among major AI providers and have long existed in software licensing (anti-benchmarking and anti-reverse-engineering terms from Oracle and Microsoft, Twitter firehose restrictions, etc.).
- Some argue the analogy is weaker here because LLMs and coding tools are sold as general-purpose development tools, so banning certain uses of their outputs (building competing models or products) feels more intrusive than banning reverse-engineering.
- Concern: dependence on vendors whose ToS let them mine your data while reserving the right to cut you off if you become “competitive.”
Enforceability and Legal Debate
- One side: vendors can choose customers and set almost any contract term unless barred by specific law (e.g., antitrust, FRAND).
- Other side: questions whether a ban on using outputs for training is enforceable when the provider has no copyright in those outputs and when copyright law may preempt contract terms; cites split case law on similar issues.
- EU consumer law is noted as more hostile to surprising or post-purchase EULA terms.
Fair Use, Data, and Scraping
- Hypocrisy noted: AI companies assert “fair use” to scrape the web and ignore robots.txt, yet forbid others from using their outputs to train.
- Idea floated: pay individuals for their AI chat histories via browser extensions; responses note synthetic data gaming, cleaning costs, and limited value unless narrowly targeted to a product.
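On the robots.txt point above: the Robots Exclusion Protocol is purely advisory — a site lists crawler user-agents and the paths they should avoid, and compliance is voluntary. A minimal sketch of how a site might ask the major AI crawlers to stay out (GPTBot and ClaudeBot are the user-agent tokens OpenAI and Anthropic have published for their crawlers; the `/private/` path is illustrative):

```text
# robots.txt — a voluntary convention; a crawler can simply ignore it

User-agent: GPTBot      # OpenAI's web crawler
Disallow: /

User-agent: ClaudeBot   # Anthropic's web crawler
Disallow: /

User-agent: *           # everyone else
Disallow: /private/     # example path; allow the rest of the site
```

This is the crux of the hypocrisy charge: robots.txt is a convention, not an access control, so nothing technically stops a scraper from fetching disallowed paths — while the same companies enforce their own restrictions through ToS and account bans.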
Anthropic’s Ban on OpenAI: Motives and Optics
- Some see this as straightforward enforcement of ToS: benchmarking allowed, product-building (e.g., coding tools) not.
- Others think it’s a PR move (“we’re so good OpenAI engineers used us”), and note OpenAI could try to regain access via unofficial accounts, though that would risk legal trouble.
- Several criticize Wired’s framing (“special developer access”) as misleading hype around normal API use.
User Experiences and Moderation
- Multiple commenters report being banned by Anthropic with little explanation, and describe both its moderation and its web-scraping behavior as aggressive.
- Others defend Anthropic’s conservative, safety-first posture while acknowledging high false positives and poor support.
Model Style and Product Positioning
- Some users prefer ChatGPT’s direct, emotionally resonant, opinionated tone and dislike Claude’s cautious, “customer service” style; they explicitly ask OpenAI not to emulate Claude’s persona.
- Others use Claude mainly for code/research, GPT for “talking and thinking,” and worry convergence would reduce differentiation.
Broader Concerns About AI and Law
- Commenters list areas where AI companies allegedly ignore law (copyright, trademarks, defamation, harassment) and debate whether model providers should be liable for defamatory outputs despite disclaimers.