US and UK refuse to sign AI safety declaration at summit

Power, hegemony, and who writes the AI rules

  • Many see the refusal as a reminder that great powers with money and guns set the rules; declarations without enforcement are “pieces of paper.”
  • Some argue only “creators” of frontier AI (US, China, big labs) will truly shape norms; regions that mainly consume (EU, Japan) can write laws but lack leverage unless they can credibly deny market access.
  • Others counter that large markets like the EU do shape behavior via regulation (e.g., consumer protection, the AI Act), and that economic size still matters even without homegrown foundation models.

What the declaration actually says, and why it divides people

  • The declaration’s linked text is broad: accessibility, “inclusive and sustainable AI,” social justice, equitable access, bias reduction, human rights, global coordination.
  • Critics call it vague virtue signaling with no enforcement, easily signed then ignored; some say it’s only meaningful as a signal when the US/UK pointedly refuse.
  • Supporters see it as baseline: “we promise not to use AI for obviously harmful goals,” and as an attempt to embed human-rights and labor protections before deployment scales.

US/UK motives and domestic culture war

  • Several comments read the US/UK stance as: don’t slow down, dismantle bureaucracy, win the AI race first, regulate later.
  • Others see it as aligning with big business and as a rejection of the equity, inclusion, social‑justice, and environmental language that is now toxic in US right‑wing politics.
  • Some frame it as theater for domestic audiences: being obstinate with Europe and “globalists” plays well at home.

Feasibility of AI regulation and enforcement

  • One camp: you can’t meaningfully police “good vs. bad AI” because training looks like any other heavy compute workload; safety pacts are as unenforceable as trying to ban math.
  • Opposing camp: large frontier runs are detectably power‑hungry, depend on a small number of fabs and data‑center operators, and can be restricted much as nuclear or chemical programs are (if major powers are willing to use sanctions, sabotage, even force); a back‑of‑envelope sketch of the power argument follows this list.
  • There’s repeated tension between “regulate compute and high‑risk uses” vs “this inevitably becomes a tool for US‑aligned incumbents to lock in a moat.”
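To make the “detectably power‑hungry” claim concrete, here is a minimal back‑of‑envelope sketch in Python. Every input is an illustrative assumption (run size, per‑accelerator throughput, board power, facility overhead, cluster size), not a measurement of any real training run.

```python
# Back-of-envelope: how conspicuous is a frontier-scale training run?
# Every constant below is an assumption chosen for illustration only.

TOTAL_FLOP = 3e25             # assumed run size, roughly GPT-4-class per public estimates
FLOP_PER_SEC_PER_GPU = 4e14   # assumed sustained throughput of one accelerator
WATTS_PER_GPU = 700.0         # assumed accelerator board power
PUE = 1.2                     # assumed datacenter overhead (cooling, networking)
NUM_GPUS = 20_000             # assumed cluster size

gpu_seconds = TOTAL_FLOP / FLOP_PER_SEC_PER_GPU      # total accelerator-seconds needed
duration_days = gpu_seconds / NUM_GPUS / 86_400      # wall-clock time on the cluster
facility_mw = NUM_GPUS * WATTS_PER_GPU * PUE / 1e6   # sustained facility power draw
energy_gwh = facility_mw * duration_days * 24 / 1e3  # total energy over the run

print(f"~{duration_days:.0f} days at ~{facility_mw:.0f} MW, ~{energy_gwh:.0f} GWh total")
# -> roughly 43 days at ~17 MW, ~18 GWh under these assumptions
```

Under these assumed numbers, a frontier run is a steady multi‑week load on the order of a small town’s power draw, which is the basis of the “detectable” argument; the rejoinder in the thread is that the same signature describes any large datacenter workload.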

Near‑term harms vs AGI/doom debates

  • Many say present risks are spam, fraud, deepfakes, surveillance, biased decision systems, and labor displacement; existing “AI safety” work is seen as narrowly focused on PR guardrails.
  • A long subthread debates AGI extinction risk: some argue unaligned superintelligence is an existential threat worth global moratoria; others dismiss this as speculative “cultish” doomerism distracting from concrete political and corporate harms.
  • Several note that even if AGI is far off, states and militaries are already pursuing AI for weapons, targeting, and autonomous decision‑making, which raises its own escalation and control risks.

Economics, inequality, and geopolitical competition

  • Commenters worry unconstrained AGI will behave like a “resource curse”: once elites no longer need a healthy, educated workforce to generate wealth, their incentive to invest in the population collapses, leaving durable techno‑feudalism and perfected surveillance.
  • Others are more optimistic: ubiquitous AI assistants and automation could free people from drudgery—if political systems redistribute gains.
  • Many argue any serious global slowdown is game‑theoretically unstable: states fear being left behind militarily and economically, and point to China and smaller powers as likely to press ahead regardless of Western declarations.