We do not think Anthropic should be designated as a supply chain risk
Perceived Optics and PR Response
- Many see OpenAI’s statement as pure damage control after public backlash and possible subscription cancellations, not a principled stand.
- Commenters highlight the contradiction: publicly defending Anthropic while simultaneously accepting the lucrative contract Anthropic rejected.
- Several argue this makes OpenAI look hypocritical and further damages its brand among developers, even if mainstream users quickly forget.
Contract Terms: “Red Lines” vs. “All Lawful Use”
- Core distinction raised:
  - Anthropic reportedly insisted on explicit contractual bans on mass surveillance and fully autonomous weapons, independent of what is currently legal.
  - OpenAI’s DoD/“DoW” agreement, as quoted in the thread, is framed as permitting “all lawful purposes,” with carveouts that defer to existing law, regulations, and department policy.
- Critics say this effectively means “you can do anything you decide is legal,” making the clauses a non-constraint; supporters counter that contracts tied to law still offer some leverage.
- Some argue Anthropic wanted technical and contractual enforcement (kill switches, usage constraints), while OpenAI relies on legal terms and its own model “guardrails.”
Allegations of Political Influence and Corruption
- Multiple comments link the outcome to large pro‑Trump donations from OpenAI leadership and note longstanding ties to influential political and business figures.
- Hypothesis: the “supply chain risk” label is retaliation for Anthropic publicly challenging the administration and a reward for OpenAI’s alignment and donations. This is widely asserted but acknowledged as unproven.
Ethics, Employee Responsibility, and Boycotts
- Strong view that any AI firm working with this administration on military/intelligence use is “profoundly compromised,” especially given existing surveillance abuses.
- Some say OpenAI staff who stay are complicit; others argue employees need income, though several counter that OpenAI compensation is high and alternative employers exist.
- A noticeable number of commenters report canceling ChatGPT subscriptions and moving to Claude, though there is broad agreement that real usage data is unavailable and the actual impact is unclear.
AI in Warfare and Mass Surveillance
- Commenters describe how LLMs, combined with transcription and sensor data, could scale mass surveillance, targeting, and the paperwork generation behind drone strikes or repression, even without being embedded directly in weapons systems.
- Others argue traditional ML and rule-based systems are better suited than LLMs for many of these tasks, and see some of the rhetoric as overstating LLM centrality.
Anthropic’s Role: Principled or Performative?
- One camp views Anthropic as taking a rare, costly ethical stand that sets a higher bar and should have been jointly supported by major labs.
- Another camp sees this as strategic branding: refusing one contract while still enabling military/intel uses (including via Palantir) and attracting “safety‑minded” talent that accelerates capabilities anyway.
- Several note that, in practice, Anthropic’s refusal didn’t stop the capability—only shifted it to OpenAI—raising questions about the real-world effect of unilateral “red lines.”
Broader Governance and Democratic Concerns
- Deep skepticism that “all lawful use” is meaningful when the executive branch can internally reinterpret legality, often via secret memos, with little accountability.
- Some emphasize that relying on corporate ethics to constrain the state is dangerous; others argue that, given weak laws and captured institutions, private refusals like Anthropic’s are one of the few remaining checks.
- A few extrapolate to global power dynamics, warning that U.S.-controlled frontier models now look like strategic munitions and may spur other regions (e.g., Europe) to pursue sovereign AI to avoid dependency.