Meta says it won't sign Europe AI agreement
Meta’s refusal and what it signals
- Many treat Meta’s refusal as a heuristic that the Code is probably good; others warn that this inference is just bias and insist on reading the text itself first.
- Meta frames the Code as “growth‑stunting overreach” that will throttle frontier models and EU startups; critics see this as lobbying spin from a company with a long history of privacy abuses.
- Some argue Meta has also contributed positively via open‑source AI and tooling, so its position can’t be dismissed outright.
OpenAI contrast and “regulation as moat”
- OpenAI has committed to signing and is portrayed as strongly pro‑regulation, a stance some commenters attribute to its deep government and military ties.
- Several commenters think the biggest incumbent backing regulation is classic “pull up the ladder” behavior, using compliance cost as a moat.
- Others simply don’t trust OpenAI’s public commitments, citing previous reversals on openness.
Copyright, training data, and responsibility
- Strong focus on Chapter 2: copyright and training.
- US: recent pretrial rulings treat training on copyrighted text as fair use, but that is contested and may be appealed; acquisition (piracy vs bulk buying/scanning) is still a separate legal issue.
- EU: no broad “fair use”; member states have narrower exceptions and different doctrines.
- The Code/Act:
  - Allows training on copyrighted works (subject to rightsholder opt‑outs) but expects “reasonable measures” to prevent infringing outputs, e.g., from memorization/overfitting.
  - Suggests providers prohibit infringing use in their T&Cs or, for open models, warn about it in the documentation.
- Debate over whether holding model providers partly responsible for downstream misuse is workable, especially for open‑source models whose providers cannot control or monitor deployment.
EU regulation, GDPR, and cookies as precedent
- One camp: the Code is onerous, technocratic, and written by people who don’t understand AI; likely to entrench incumbents and lawyers, as with GDPR.
- Other camp: most provisions are “common sense” (transparency, safety, user choice) and needed because large firms won’t self‑police.
- Cookie banners are a huge flashpoint:
  - Critics say they show the EU’s failure to foresee real‑world behavior, leading to dark‑pattern consent theatre with little real privacy gain.
  - Defenders blame companies and ad networks for malicious compliance; they argue GDPR enabled real data‑access/deletion rights and could work if properly enforced and if sites dropped unnecessary tracking.
Innovation, competitiveness, and “keeping up”
- Concern that threshold‑based rules (e.g., model scale) will freeze EU startups below those levels while US/China firms race ahead, then enter Europe with stronger products and big legal budgets.
- Others reply that slightly weaker or slower models are acceptable if that buys more accountability and reduces power concentration.
- Some fear Europe is repeating a pattern: heavy regulation, weak local champions, dependence on US/Chinese tech; others welcome fines and constraints on foreign megacorps even if it means fewer domestic giants.
Voluntary Code of Practice vs future law
- The Code is described as a voluntary, EU‑endorsed self‑regulation step ahead of binding rules.
- Skeptics call it empty virtue signaling that only PR‑sensitive players will follow.
- Supporters say it’s a sandbox: lets companies trial obligations, refine them based on reality, and avoid a sudden cliff when they become hard law.
AI risk, timing, and philosophy of regulation
- One side: early AI regulation is premature and likely to misfire; regulators rarely predict markets correctly and often protect entrenched interests.
- Other side: waiting until harms fully materialize (pricing discrimination, autonomous weapons, mass surveillance, job displacement) is too late; the whole point is to shape the market now.
- Broader tension runs through the thread: trust in democratic regulation vs fear of bureaucratic overreach and Europe “self‑sabotaging” its tech future.