OpenAI asks White House for relief from state AI rules
Regulatory strategy and federal preemption
- OpenAI’s proposal is described as asking the White House to preempt state AI laws if companies voluntarily submit models to a federal AI Safety Institute.
- Many see this as “regulatory capture”: OpenAI previously pushed for strong regulation, and now seeks exemptions or a centralized regime it can better influence.
- Others argue a patchwork of state rules (especially from California) could make US-based AI services unworkable and simply push users to other jurisdictions or to VPN workarounds.
- There’s debate over whether the White House can meaningfully preempt state law without new Congressional action, and concern about growing executive power used “under color of law.”
Copyright, “freedom to learn,” and training data
- A huge subthread disputes OpenAI’s call for a “copyright strategy that promotes the freedom to learn,” i.e., preserving the ability to train on copyrighted material.
- One side argues: humans can read books, internalize knowledge, and create derivative works; models should be treated analogously, and current copyright tools are ill-suited to ML.
- The other side stresses acquisition: AI firms scraped or torrented vast amounts of paywalled or pirated books, music, and code without licenses, unlike individuals who must buy or borrow.
- Multiple examples (libgen, Books3, music datasets) are cited to argue this is not just “reading the open web” but systematic infringement at industrial scale.
- There’s strong resentment that individuals were aggressively prosecuted for minor piracy while AI firms doing the same at massive scale seek retroactive legal blessing.
Proposed fixes and their problems
- Ideas floated: a new “ML training right,” Spotify‑style per‑use royalties, influence-analysis to apportion payments (see the sketch after this list), or blanket levies on AI usage.
- Others note huge practical issues: tracing which works influence which outputs, gaming of royalty systems, and the dominance of large corporate rights-holders over individual creators.
- Some advocate shortening copyright terms generally; others say first fix overlong duration, then debate AI‑specific carve‑outs.
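To make the “influence-analysis to apportion payments” idea concrete, here is a minimal, hypothetical Python sketch of only the easy half: splitting a per-use royalty pool across works in proportion to already-given influence scores. The function name, de-minimis cutoff, and scores are illustrative assumptions, not anything proposed in the thread; commenters' point is precisely that producing trustworthy influence scores and preventing gaming is the unsolved part.

```python
# Hypothetical sketch: apportion a per-use royalty pool across training works
# in proportion to estimated influence scores. The influence scores are
# assumed to come from some attribution method, which is the hard part.

from typing import Dict


def apportion_royalties(pool_cents: int, influence: Dict[str, float]) -> Dict[str, int]:
    """Split a royalty pool (in cents) across works, proportional to influence.

    `influence` maps a work identifier to a non-negative influence estimate
    for one generated output. Works below a de-minimis share are dropped,
    which is one place such a scheme could be gamed or disputed.
    """
    total = sum(influence.values())
    if total <= 0:
        return {work: 0 for work in influence}

    de_minimis = 0.001  # hypothetical cutoff: ignore shares under 0.1%
    shares = {w: s / total for w, s in influence.items() if s / total >= de_minimis}
    norm = sum(shares.values())
    return {w: int(round(pool_cents * share / norm)) for w, share in shares.items()}


if __name__ == "__main__":
    # Toy example: three works "influenced" one output; pool is $0.50 per use.
    scores = {"book_a": 0.7, "song_b": 0.25, "repo_c": 0.05}
    print(apportion_royalties(50, scores))
    # Prints each work's share in cents; rounding means the shares may not
    # sum exactly to the pool, another accounting detail real schemes must fix.
```

Even this toy version surfaces the objections raised in the thread: the cutoff silently zeroes out long-tail creators, and whoever controls the influence scores controls the money.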
US vs China and national security framing
- OpenAI and allies frame lenient US rules as needed to maintain an AI lead over China, whose companies allegedly ignore Western IP and whose models must follow “socialist values.”
- Critics see this as opportunistic: national security rhetoric replacing earlier “AI safety” arguments to justify special treatment and bans on PRC-produced or open‑weights competitors like DeepSeek.
- Some worry that relaxing copyright only for AI will invite reciprocal erosion of US IP abroad and further empower large US tech firms, not working artists.
Centralization, corporate power, and democracy
- The thread repeatedly broadens into concerns about centralized power: federal vs state, corporations vs creators, and tech vs democratic oversight.
- Examples from other domains (food safety, bribery laws, driverless cars) are used to argue that “no guardrails” is unrealistic once technologies affect life, safety, and labor at scale.
- Others counter that overregulation, particularly around training data, may simply shift innovation offshore and be impossible to enforce technically.
Open vs closed AI and competitive landscape
- DeepSeek, Meta’s open LLMs, and synthetic‑data training are seen as having “shaken” OpenAI and undercut its narrative that only a few well‑regulated US giants can safely build advanced models.
- Some believe OpenAI still has a massive moat (compute, brand, enterprise deals); others say its business is fragile and this push is about building a regulatory moat against open-source and foreign rivals.