OpenAI agrees with the Dept. of War to deploy models on its classified network
Perceived Betrayal of Anthropic
- Many see OpenAI’s deal as crossing a picket line immediately after Anthropic was punished for holding to its “red lines” (no mass domestic surveillance, no fully autonomous weapons).
- Commenters argue OpenAI is helping legitimize the government’s retaliation against Anthropic, despite prior public “solidarity” statements.
- Some believe the only way this could happen so fast is if OpenAI quietly signaled more flexibility in practice than Anthropic was willing to accept.
Contract Terms: Law vs. Vendor Red Lines
- A key distinction discussed: Anthropic insisted on its own binding constraints (and reserved the right to judge violations itself), while OpenAI appears to defer to “all lawful use,” leaving the government to define what counts as lawful.
- An administration official frames this as the core issue: safety constraints should derive from U.S. law and policy, not from a private CEO’s ToS.
- Several argue that “lawful” offers meaningless protection when the executive can reinterpret or rewrite the law, or rely on secret legal memos.
Wording, Loopholes, and Trust in OpenAI
- The phrase “domestic mass surveillance” is seen as a huge loophole: it implies that foreign or outsourced surveillance is fine.
- “Human responsibility for the use of force” is criticized as weaker than “human in the loop”; responsibility could be nominal and far removed from real-time control.
- Many openly say they assign near-zero credibility to OpenAI leadership’s public statements, citing a long pattern of alleged dishonesty and weasel language.
Politics, Corruption, and Power Games
- Large campaign donations and personal ties between OpenAI leadership, major cloud vendors, and the current administration are repeatedly cited as likely drivers.
- Some think Anthropic was targeted as a political enemy or a “woke” company, with OpenAI rewarded as the compliant, friendly contractor.
- Others see this as classic Trump-style spite: blow up one deal, then sign an equivalent or worse one just to demonstrate dominance.
Employee Ethics and Community Response
- OpenAI employees who signed the “We Will Not Be Divided” letter are portrayed as facing intense cognitive dissonance; many commenters say staying now is complicity.
- One self-identified employee defends staying, claiming the deal bans domestic mass surveillance and autonomous weapons; most replies call this naïve or self-interested.
- Large numbers of commenters report canceling ChatGPT subscriptions, deleting accounts, and switching to Anthropic’s Claude, Gemini, or smaller European providers as a moral protest.