Anthropic co-founder on cutting access to Windsurf

Platform risk and trust

  • Many see this as another reminder that building workflows or products on top of proprietary AI APIs is risky: acquisitions, policy changes, or capacity shifts can break critical tools overnight.
  • Commenters compare the move to long-standing “shell games” in enterprise software and to earlier episodes like Google deprecating popular APIs.
  • Some commenters conclude Anthropic and OpenAI (and possibly others) are fundamentally untrustworthy as infrastructure providers; others say this is just normal business reality.

Was Anthropic’s move reasonable?

  • One camp: It’s obviously reasonable not to give favorable, high-volume access to a direct competitor’s product (Windsurf now being part of OpenAI). Customers can still “bring their own key” and use Claude, so this is just the end of special treatment.
  • Opposing view: This demonstrates Anthropic is an unreliable vendor that can cut off access whenever a customer becomes strategically inconvenient. Some worry about antitrust or “anti‑competitive” behavior, though others argue this is not illegal or even clearly anticompetitive.
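The “bring your own key” arrangement mentioned above is simple in practice: the tool stops routing traffic through its own vendor-negotiated access and instead issues requests against the model API with a key the end user supplies. A minimal sketch of that pattern, assuming Anthropic’s public Messages API endpoint and headers (the helper function itself is hypothetical):

```python
# Sketch of the "bring your own key" (BYOK) pattern: the tool builds a
# direct request using the *user's* API key rather than the vendor's.
# Endpoint and header names follow Anthropic's public Messages API;
# build_byok_request is an illustrative helper, not a real library call.

ANTHROPIC_MESSAGES_URL = "https://api.anthropic.com/v1/messages"

def build_byok_request(user_api_key: str, prompt: str,
                       model: str = "claude-3-5-sonnet-20241022") -> dict:
    """Return the URL, headers, and JSON body for a direct, user-keyed call."""
    return {
        "url": ANTHROPIC_MESSAGES_URL,
        "headers": {
            "x-api-key": user_api_key,          # the end user's key, not the tool vendor's
            "anthropic-version": "2023-06-01",  # required API version header
            "content-type": "application/json",
        },
        "json": {
            "model": model,
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_byok_request("sk-ant-...", "Refactor this function.")
```

Because the key, billing relationship, and rate limits all belong to the user, the provider cutting off the *tool vendor’s* bulk access leaves this path intact; that is why some commenters see the change as merely the end of special treatment.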

Analogies and vertical layers

  • Analogies used: bakeries and bread resellers, Costco pizza resale, SpaceX launching competitor satellites, Apple limiting features to iOS.
  • Debate centers on whether model makers (level 1), infra providers (level 2), and app/tool builders (level 3) should be able to easily cut one another off, and whether that destroys trust in the ecosystem.

Economics of LLM APIs

  • Disagreement over whether model APIs are low-margin or even negative-margin.
  • Some argue per‑token APIs have strong unit economics and that “loss-leading” inference at scale makes no sense given compute scarcity.
  • Others note high training and staffing costs and say it’s still unclear if frontier labs can sustain high margins.
  • A subthread debates scale efficiencies, batching, custom hardware, and whether large providers can turn today’s marginal economics into tomorrow’s profit engine.
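Much of this disagreement reduces to simple arithmetic under contested assumptions. A back-of-the-envelope sketch with purely illustrative numbers (not any provider’s actual costs or prices) shows why batched inference can look high-margin before training and staffing costs enter the picture:

```python
# Back-of-the-envelope per-token serving margin. All numbers below are
# illustrative assumptions, not real figures from any provider.

def serving_margin(gpu_cost_per_hour: float,
                   tokens_per_second: float,
                   price_per_million_tokens: float) -> float:
    """Gross margin on inference alone, ignoring training and staffing costs."""
    tokens_per_hour = tokens_per_second * 3600
    cost_per_million = gpu_cost_per_hour / tokens_per_hour * 1_000_000
    return (price_per_million_tokens - cost_per_million) / price_per_million_tokens

# E.g. a $4/hr accelerator serving 2,000 tok/s in aggregate (batched requests)
# at $3 per million output tokens:
m = serving_margin(4.0, 2000.0, 3.0)
# → roughly 0.81, i.e. ~81% gross margin under these assumed numbers
```

The subthread’s real dispute is over the inputs: batching and custom hardware push `tokens_per_second` up, while the skeptics argue that amortized training runs and research payroll, which this per-token view deliberately excludes, dominate total cost.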

Impact on developers and tooling

  • Concern that any app built on top of a model provider can become a future target if it drifts into the provider’s product space (e.g., coding assistants vs. “Claude Code”).
  • Some insist this risk is similar to any SaaS dependency; others emphasize that LLM providers can yank a core capability, not just a convenience feature.
  • Several commenters advocate hedging with open-source tools and self‑hosted or pluggable setups (e.g., Aider, Cline, Void, local models), even at some quality or cost penalty.
  • Expectation that we are entering an era of aggressive LLM monetization and more overtly anti‑competitive moves, with higher prices and less “it just works” stability.
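The hedging strategy the commenters describe usually amounts to coding against a thin provider interface, so a hosted API can be swapped for a local or open-source backend without touching call sites. A minimal sketch of that design (all names here are hypothetical; real tools like Aider and Cline expose their own model-configuration mechanisms):

```python
# Sketch of the pluggable-backend hedge: call sites depend only on a thin
# interface, so a provider that yanks access can be swapped for a
# self-hosted model. All names are hypothetical illustrations.

from typing import Callable, Dict

# A "provider" is simply a function from prompt text to completion text.
Provider = Callable[[str], str]

_PROVIDERS: Dict[str, Provider] = {}

def register(name: str, provider: Provider) -> None:
    """Make a backend available under a short name."""
    _PROVIDERS[name] = provider

def complete(prompt: str, provider: str = "local") -> str:
    """Route a completion through whichever backend is configured."""
    return _PROVIDERS[provider](prompt)

# A trivial self-hosted stand-in; a real setup might point this at an
# OpenAI-compatible local inference server instead.
register("local", lambda prompt: f"[local model] {prompt}")

print(complete("explain this diff"))
# → [local model] explain this diff
```

The quality or cost penalty the commenters accept is the price of the swap being a one-line `register` call rather than a rewrite, which is exactly the insurance they want against a provider cutting off a core capability.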