Google drops pledge not to use AI for weapons or surveillance

Megacorps, Markets, and Morality

  • Many see Google’s change as a predictable “mask off” moment: megacorps are driven by profit, not ethics; any moral stance lasts only until it conflicts with revenue.
  • The discussion splits between those who call corporations “amoral” machines (Moloch, “sentient piles of money”) and those who call them “evil” because they systematically prioritize profit over human welfare.
  • Others stress corporations are made of people, so responsibility ultimately lies with executives, employees, shareholders, and even customers who keep buying and investing.
  • Several comments frame this as a shift from value creation to value extraction (ads, data harvesting, licensing, “technofeudalism”) where morality only matters if it supports growth.

Pledges, PR, and Legal Risk

  • Corporate “pledges” are widely viewed as marketing: costless to adopt, discarded once inconvenient.
  • Some speculate the pledges were dropped now to limit securities-fraud exposure: publicly promising constraints while secretly ignoring them could count as misleading investors.
  • Others note that openly removing the pledge is at least more honest than quietly violating it, and should be treated as a clear signal of strategic direction.
  • This is linked to Google’s earlier “Don’t be evil” motto and its removal, reinforcing a long-run pattern of retreat from explicit ethical commitments.

Surveillance, Data, and the State

  • A recurring thread: advertising and “free” services justified mass data collection, which dovetails naturally with government and intelligence interests.
  • Past programs (like PRISM and telco wiretap rooms) are cited as precedent; some argue Big Tech functions as a quasi-government surveillance arm.
  • LLMs are seen as a qualitative shift: they can potentially ingest all of a person’s digital exhaust and infer sentiments, violations, and “loyalty” at scale.
  • Others push back that large-scale automated monitoring already existed, and that the cost of LLM-based analysis can be managed with cascades of smaller models (see the sketch after this list).
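
A minimal sketch of the cascade idea that comment gestures at, assuming a screening pipeline in which a cheap first stage handles the bulk of inputs and escalates only flagged items to a larger, costlier model. Every stage name, scoring function, and threshold below is a hypothetical placeholder, not a real API or anything attributed to Google.

```python
# Model cascade sketch: each stage scores an input and either stops
# (the cheap model is confident the item is uninteresting) or
# escalates it to the next, larger model. All stages are stand-ins.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CascadeStage:
    name: str
    score: Callable[[str], float]  # relevance score in [0, 1]
    threshold: float               # escalate only if score >= threshold

def run_cascade(text: str, stages: list[CascadeStage]) -> tuple[str, float]:
    """Run `text` through successively larger models, stopping early
    when a stage scores it below its escalation threshold."""
    score = 0.0
    for stage in stages:
        score = stage.score(text)
        if score < stage.threshold:
            return stage.name, score  # filtered out at a cheap stage
    return stages[-1].name, score     # survived every filter

# Placeholder stages: a keyword filter, a small classifier, a large LLM.
stages = [
    CascadeStage("keyword_filter", lambda t: float("target" in t), 0.5),
    CascadeStage("small_classifier", lambda t: 0.7, 0.5),
    CascadeStage("large_llm", lambda t: 0.9, 0.5),
]
print(run_cascade("routine message", stages))  # ('keyword_filter', 0.0)
print(run_cascade("target spotted", stages))   # ('large_llm', 0.9)
```

The economics are the point: if the cheap first stage discards most of the stream, the expensive model only ever sees a small fraction of it, which is why these commenters argue cost is not a real barrier to monitoring at scale.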

AI Weapons, Arms Races, and Ethics

  • One camp: given adversaries’ advances (drones, AI targeting in current wars), democracies “must” build AI weapons to maintain deterrence; refusing is framed as unilateral disarmament.
  • Another camp points to historical arms control (chemical, biological, landmines) and argues specific AI uses—especially fully autonomous kill systems—should be taboo.
  • Examples like autonomous/AI-assisted targeting in Gaza and drone warfare in Ukraine are cited as early, troubling precedents that already blur the line between “defense” and “mass killing”.
  • There is concern about domestic use: tools developed “for national security” will eventually be turned inward for surveillance, repression, and crowd control.

Mixing Consumer Platforms and War

  • Many see special danger in the same company handling global email, maps, phones, and video while also building weapons and surveillance tools.
  • Proposed mitigations include strict separation between defense work and consumer divisions, or legal restrictions on using user data in military systems.
  • Skeptics argue any such separation will be cosmetic: corporate structures can be rearranged, but capabilities and incentives remain shared.

Personal Responsibility and Complicity

  • Some argue that if a company is doing evil, employees and users who stay are complicit; others reply that systemic incentives, not individual morality, dominate outcomes.
  • Past internal protests (e.g., against military or controversial government contracts) and subsequent firings are referenced as evidence that employee resistance has limited impact.
  • Several comments liken current tech–military collaboration to historical corporate complicity in atrocities, warning that we are watching how “it happens” unfold in real time.