Gavin Newsom vetoes SB 1047

What SB 1047 Aimed To Do

  • Target only “frontier” models above high training-cost and compute thresholds (roughly $100M in training cost and 10^26 FLOPs, well beyond current GPT‑4, according to some).
  • Require “reasonable care” to avoid “critical harm” (defined as mass-casualty WMD use, large cyberattacks on critical infrastructure, or autonomous criminal conduct causing death or ≥$500M damage).
  • Exclude harms from information that’s already reasonably publicly accessible.
  • Create a “Board of Frontier Models” and mandate a full-shutdown capability, safety protocols, and third-party audits for covered models.

Governor’s Stated Reasons for Veto

  • Argues model-size/cost is a poor proxy for risk; smaller specialized systems could be as dangerous.
  • Says the bill ignores deployment context (high‑risk vs. trivial uses) and could impose heavy requirements even on basic, benign functions when deployed within large systems.
  • Warns it could give a false sense of security while curtailing innovation.
  • Calls for evidence‑based, risk‑focused regulation and coordination with federal efforts, not this specific framework.

Arguments Supporting the Veto

  • Bill seen as overbroad and vague (“reasonable care,” “unreasonable risk”), and therefore highly litigable.
  • Some say it regulates models instead of the real problem: how AI systems are integrated into safety‑critical domains.
  • Concern it would raise barriers to entry and entrench incumbents; some call it an attack on open models and on smaller startups without large compliance budgets.
  • Fear California would drive AI R&D to other states or countries, similar to complaints about EU tech regulation.

Arguments Criticizing the Veto

  • Supporters view SB 1047 as a narrow first step focused only on extreme harms (WMDs, catastrophic cyberattacks), not ordinary accidents or individual deaths.
  • Some AI researchers and safety advocates are cited as backing the bill; critics of the veto say it delays needed guardrails while capabilities advance quickly.
  • Others argue that if models could ever enable such harms, strong pre‑deployment liability and shutdown requirements are exactly what’s needed.

Broader AI Risk & Regulatory Debates

  • Deep split between those who fear near‑term AGI/X‑risk (superintelligence, rogue agents, bio/cyber‑weapons) and those who see this as a sci‑fi distraction from more concrete issues (misuse, automation harms, fraud, ad‑tech, privacy).
  • Disagreement over whether open‑weight models are inherently more dangerous or essential for safety and competition.
  • Some see the entire fight as early-stage regulatory capture and political theater; others see it as a serious but imperfect attempt to grapple with unprecedented risks.