Cali's AG Tells AI Companies Almost Everything They're Doing Might Be Illegal

Geopolitics and the “China argument”

  • Some argue the US cannot meaningfully crack down on Big Tech/AI because China (e.g., DeepSeek) will press ahead, and AI firms are now a strategic “golden goose.”
  • Others counter that “China won’t obey copyright” is irrelevant to what US law and ethics should permit.
  • Long subthread debates whether integrating China into global trade was a moral success (massive poverty reduction) or a strategic and economic mistake that undermined US workers.
  • Tension between the nationalist view (“the US should act only for its own people”) and the cosmopolitan view (“reducing global inequality is good, even at some US cost”).

Copyright, data, and fair use

  • Big disagreement over whether current AI training practices reflect “this can’t be done legally” or simply “we don’t want to pay.”
  • One side: it’s practically impossible to license trillions of tokens from millions of rightsholders; if strict copyright is applied, AI training (and even Internet Archive–style archiving) may be illegal.
  • Other side: impossibility doesn’t excuse mass infringement; if you can’t license it, you shouldn’t use it. Let companies lobby to change copyright, not just ignore it.
  • Fair use is contested: some cite Google Books/search precedent; others point to the Supreme Court’s recent narrowing of “transformative” use and to clear market harm.
  • German law, which allows text and data mining unless rightsholders opt out via a machine-readable signal, is cited as one model.
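
For context on what a “machine-readable signal” can look like in practice, here is a minimal sketch (Python) that checks a site’s robots.txt before fetching a page for text and data mining. The crawler name and URL are invented for illustration; the German/EU rules do not mandate robots.txt specifically, and real opt-outs may also use HTTP headers or page-level metadata.

```python
# Minimal sketch: honoring one machine-readable opt-out signal (robots.txt)
# before fetching a page for text/data mining. The crawler name and URL are
# hypothetical; this is not the only signal a rightsholder might use.
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

CRAWLER_UA = "ExampleTDMBot"  # hypothetical crawler identity

def may_mine(url: str) -> bool:
    """Return False if the site's robots.txt disallows this crawler for the URL."""
    parts = urlparse(url)
    rp = RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # fetch and parse the site's robots.txt
    return rp.can_fetch(CRAWLER_UA, url)

if __name__ == "__main__":
    print(may_mine("https://example.com/some-article"))
```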

What the California AG advisory actually says

  • Several commenters say Gizmodo’s framing is misleading: the memo opens by praising AI’s potential and mainly says “don’t do illegal things with AI.”
  • Core flagged risks:
    • Using AI to foster deception (deepfakes, undisclosed AI-generated media).
    • False advertising about AI accuracy/utility.
    • AI systems that cause adverse or disproportionate impacts on protected classes, reinforcing discrimination.
  • Lawyers note that “disproportionate impact” is long‑standing civil-rights language with extensive case law; the memo just applies existing standards to AI.
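
As a concrete illustration of how “disproportionate impact” is often screened for in practice, here is a minimal sketch of the four-fifths (80%) selection-rate rule familiar from US employment guidance. The numbers are invented, and the advisory itself does not prescribe any particular test.

```python
# Minimal sketch of the "four-fifths rule" screen for adverse impact:
# compare each group's selection rate to the highest group's rate; a ratio
# below 0.8 is commonly treated as evidence of disproportionate impact
# worth further review. The counts below are invented for illustration.
selections = {
    "group_a": {"applied": 200, "selected": 120},  # 60% selection rate
    "group_b": {"applied": 150, "selected": 60},   # 40% selection rate
}

rates = {g: d["selected"] / d["applied"] for g, d in selections.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, ratio vs. best={ratio:.2f} -> {flag}")
```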

Deception, tools, and liability

  • Pencil analogy: some argue that penalizing AI providers for deceptive uses of their tools is like banning pencils because pencils can be used to write propaganda.
  • Others reply AI providers are active service operators, not neutral hardware vendors, and already impose output restrictions; foreseeability creates duties.

Bias, discrimination, and explainability

  • Commenters in finance/recruiting stress that black-box models for lending/hiring are already legally dangerous; firms need explainable models and model-risk management (see the sketch after this list).
  • Example biases (models defaulting to “he” for doctors and “she” for nurses) show that datasets encode social patterns; commenters disagree over when biased outputs become legally actionable vs. merely undesirable.
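
As one illustration of what “explainable” can mean for lending, here is a minimal sketch that trains a plain logistic regression and reports which features push a given applicant toward denial (the kind of signal that can back adverse-action reason codes). The feature names and data are invented; real model-risk management involves far more than this.

```python
# Minimal sketch: an interpretable credit model whose per-applicant feature
# contributions can back "reason codes" for an adverse-action notice.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["debt_to_income", "late_payments", "credit_history_years"]

# Toy training data: rows are applicants, columns follow `features`.
X = np.array([
    [0.10, 0, 12],
    [0.45, 3, 2],
    [0.20, 1, 8],
    [0.60, 4, 1],
    [0.15, 0, 15],
    [0.50, 2, 3],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

applicant = np.array([0.55, 3, 2])
# Contribution of each feature to the approval log-odds, relative to the
# training mean, so reasons reflect how this applicant differs from typical.
contrib = model.coef_[0] * (applicant - X.mean(axis=0))
reasons = sorted(zip(features, contrib), key=lambda t: t[1])  # most negative first

print("Top reasons pushing toward denial:")
for name, value in reasons[:2]:
    print(f"  {name}: {value:+.2f} (log-odds)")
```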

Regulation, vagueness, and rule of law

  • Some see “you might be breaking the law” messaging as dangerous vagueness enabling selective enforcement; call for clear ex-ante guidance.
  • Others respond that uncertainty is normal until courts create precedent; rule of law is about neutral adjudication, not perfect clarity.