Our Agreement with the Department of War

Contract language and “all lawful purposes”

  • The central debate is over the clause allowing DoD use of OpenAI systems “for all lawful purposes.”
  • Many read this as effectively “use for anything,” since the executive branch can reinterpret, secretly stretch, or simply ignore the law, and can rewrite its own internal policies and directives.
  • Others argue it at least sets an objective contractual standard (the law as written), better than nothing, though still weak in practice.

Comparison with Anthropic and morals vs law

  • The thread repeatedly contrasts OpenAI (which accepted “all lawful purposes”) with Anthropic (which wanted explicit red lines on autonomous weapons and mass surveillance, plus real-time veto power over uses).
  • One camp: Anthropic was inappropriately “imposing its own morals” on the military.
  • Opposing camp: a company is entitled, and arguably morally obliged, to refuse uses it considers unethical even if they are technically legal; Anthropic’s stand is praised as rare corporate backbone.

Autonomous weapons and human-in-the-loop language

  • The condition “no independent direction of autonomous weapons where law or policy requires human control” is seen as hollow: policy can be rewritten at will, and “human in the loop” can degenerate into rubber‑stamping machine recommendations.
  • The claim that the models “can’t power fully autonomous weapons because they’re cloud, not edge” is widely ridiculed as technical sleight of hand, since nothing prevents a weapons system from calling a cloud API over a network link.

Surveillance, “domestic” qualifier, and data buying

  • The contract’s promise not to enable domestic mass surveillance is read as permitting large‑scale monitoring of foreigners and possibly Americans via third‑party data purchased from private brokers.
  • Several note the US government’s history of warrantless surveillance and secret legal memos as proof that “complies with the Fourth Amendment / FISA / EO 12333” is not reassuring.

Trust in OpenAI and leadership

  • Long arc: from nonprofit “open” safety lab to closed, profit‑maximizing defense contractor; many see a pattern of self‑imposed guardrails being abandoned whenever doing so becomes lucrative.
  • Altman is widely described as untrustworthy and opportunistic; the failed board coup is retrospectively framed as prescient.
  • Some commenters view this as equivalent in spirit to earlier tech–military entanglements (e.g., IBM in the 1930s).

Employees, users, and corporate power

  • A number of users report canceling OpenAI subscriptions and switching to alternatives, partly to “send a signal,” though some doubt consumer cancellations will matter much against the scale of potential government revenue.
  • Calls for OpenAI employees with financial freedom to quit; suggestions that only mass resignations or unions could meaningfully constrain such decisions.
  • Broader worry: a trajectory where under‑regulated private AI firms become key arms suppliers in an increasingly unaccountable security state.