Dario Amodei calls OpenAI’s messaging around military deal ‘straight up lies’
Anthropic vs OpenAI on Pentagon Deal
- Many commenters see a clear divergence: Anthropic refused Pentagon terms over two “red lines” (domestic mass surveillance and fully autonomous weapons), while OpenAI accepted a deal described publicly as allowing “all lawful use” with a “safety layer.”
- Several argue OpenAI’s conditions amount to little more than a promise that the DoW won’t break its own rules, which, given executive flexibility and secret FISA courts, many view as a blank check.
- The leaked internal memo from Dario Amodei characterizes OpenAI’s safeguards and Pentagon/Palantir “safety layers” as mostly “safety theater” that placates employees rather than preventing abuse.
- Commenters note the Pentagon reportedly rejected similar safeguards from Anthropic, then accepted a deal with OpenAI, which many interpret as evidence the terms are substantively weaker.
Palantir, Surveillance, and Accusations of Hypocrisy
- A major thread questions Anthropic’s moral stance given its partnership with Palantir, widely associated with government surveillance, ICE targeting tools, and “dragnet” data fusion.
- Defenders say Anthropic imposed contractual limits (no domestic surveillance, disinformation, weapons, etc.) and that Palantir “just” integrates data rather than collecting it, though others call this a distinction without a difference.
- Critics argue that facilitating foreign and intelligence-community surveillance while objecting to Pentagon surveillance of US citizens is an ethically thin line, and one that is practically hard to enforce.
Politics, Power, and Motivation
- Multiple comments allege the Trump administration is punishing Anthropic for not donating or “playing ball,” while rewarding OpenAI leadership that did.
- Some see Anthropic’s stand as both ethical and strategic: sacrificing a ~$200M contract to strengthen recruiting, brand, and long‑term trust, especially among safety‑minded researchers.
- Others think both labs are doing “safety theater” under intense financial pressure to secure massive government AI budgets.
Autonomous Weapons and Mass Surveillance Concerns
- Debate over what “fully autonomous weapons” means: most agree it refers to systems that select and fire on targets without human approval, e.g., loitering munitions that decide on their own whom to kill.
- Commenters highlight that mass surveillance is largely legal today; “all lawful use” is seen as dangerous when laws and secret courts can be reshaped to permit very broad monitoring.
Community Reactions and Alternatives
- Some users report canceling ChatGPT subscriptions and switching to Claude, DeepSeek, or local models; others say they distrust all major labs equally.
- There is skepticism about putting Anthropic “on a pedestal,” especially given reports they are back in talks with the Pentagon and their past work with Palantir.