We Will Not Be Divided
Background: Anthropic vs. Department of War
- Thread assumes knowledge of Anthropic’s two “red lines”: no domestic mass surveillance and no fully autonomous weapons (no human in the kill loop).
- Government response: threats to invoke the Defense Production Act or to label Anthropic a “supply chain risk,” a designation that would effectively bar not only the Pentagon but also its contractors and suppliers from using Anthropic’s models at all.
- Commenters see this as unprecedented punishment normally reserved for foreign adversaries, not a domestic company.
Reactions to Anthropic’s Stance
- Many see Anthropic’s refusal as unusually brave and morally grounded, potentially inspiring others (“courage is contagious”).
- Others are cynical: Anthropic already permits semi‑autonomous military use and objects only to current safety levels and domestic scope, so the stand could be “incredible marketing” rather than deep principle.
- Non‑US commenters resent that protections are explicitly framed as “domestic,” implying foreign populations are fair targets.
Employee Letter & Worker Power
- The open letter from Google and OpenAI employees is praised as a rare, public moral stand in big tech.
- Critics call it “toothless hope”: no explicit commitment to strike or resign; mostly anonymous signatures; real leverage would be unions, coordinated walkouts, or mass departures.
- Some argue even if AI companies are hostile to labor, workers should still push them to resist even worse government uses.
OpenAI’s Deal with the Pentagon
- Shortly after the letter, OpenAI announced an agreement to deploy models on a classified DoW network, claiming alignment with the same high‑level principles as Anthropic.
- Many commenters don’t believe the equivalence: they suspect either quiet concessions, a legal “all lawful uses” fudge, or outright PR spin.
- OpenAI leadership is widely portrayed as untrustworthy and opportunistic; some view this as a de facto government‑backed bailout and competitive strike against Anthropic.
Government Power, Law, and Authoritarian Drift
- Strong concern that “supply chain risk” is being weaponized as political retaliation, not genuine security policy, with implications for any company that defies the administration.
- The Defense Production Act is debated: some say compelling AI firms under it would be legally straightforward; others stress it has not been formally invoked and that moral vetoes by private actors should still matter.
- Several see this as part of a broader authoritarian pattern: loyalty tests, corporatism, and erosion of norms about private autonomy and rule of law.
AI, Openness, and Weaponization
- One camp argues gating AI only guarantees eventual state seizure; therefore everything (models, code, research) should be open so power is diffused and not monopolized by governments or a few firms.
- Others counter that unconstrained powerful AI, especially in biology, makes catastrophic misuse (e.g. engineered pandemics) far easier than defense; openness could be disastrous.
- There is resignation that AI will inevitably be used for war by someone (US, China, others); the debate is whether democracies can or should draw stricter lines than their adversaries.
Broader Implications
- Foreign and US commentators say this episode will further damage global trust in US tech as reliable infrastructure; companies may look harder at non‑US AI and hardware ecosystems.
- Some call out the hypocrisy of tech workers: they profited from surveillance capitalism and labor displacement, but only draw a “red line” at explicit spying and killing.
- Others insist that imperfect actors taking a stand on specific abuses is still valuable, and that refusing “domestic mass surveillance + autonomous killing” is a meaningful, if limited, boundary.