AI systems with 'unacceptable risk' are now banned in the EU

Definition and Scope of “AI Systems”

  • Commenters highlight that the Act’s definition (“machine-based system… varying autonomy… may adapt… infers outputs that influence environments”) is broad and can cover many ML/statistical systems, not just LLMs.
  • Debate over whether this effectively sweeps in “just software” (e.g., chess engines, A*, weather prediction, thermostats). Some argue the key criteria are autonomy/adaptiveness and non‑transparent inference, so purely rule‑based systems remain outside the definition via the recitals (see the sketch after this list).
  • There is uncertainty about edge cases: generated code, rate limiters, learning thermostats, or avalanche‑risk predictors, especially when they affect life‑or‑death or safety‑critical decisions.
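
As a purely illustrative sketch of that boundary (the class names, thresholds, and update rule below are invented for this summary, not drawn from the Act), compare a fixed‑rule thermostat with one that adapts after deployment:

```python
# Hypothetical sketch of the scoping debate above; nothing here is a legal test.

class RuleThermostat:
    """Fixed, human-authored rule: the kind of simple rule-based system
    commenters say the recitals keep outside the definition."""

    def __init__(self, setpoint: float = 21.0):
        self.setpoint = setpoint

    def heat_on(self, temp: float) -> bool:
        return temp < self.setpoint


class LearningThermostat:
    """Adjusts its setpoint from occupant feedback: 'varying autonomy ...
    may adapt', which is why commenters argue the broad definition can
    capture a device like this."""

    def __init__(self, setpoint: float = 21.0, step: float = 0.1):
        self.setpoint = setpoint
        self.step = step

    def feedback(self, too_cold: bool) -> None:
        # The operating rule is no longer fixed by a human; it is
        # inferred from data accumulated after deployment.
        self.setpoint += self.step if too_cold else -self.step

    def heat_on(self, temp: float) -> bool:
        return temp < self.setpoint


fixed = RuleThermostat()
adaptive = LearningThermostat()
for _ in range(5):
    adaptive.feedback(too_cold=True)   # occupants keep reporting cold

print(fixed.heat_on(21.2))     # False: rule stays at the authored 21.0 setpoint
print(adaptive.heat_on(21.2))  # True: setpoint drifted to ~21.5 from feedback
```

The recitals’ carve‑out tracks the first pattern, where the rule is fixed and human‑authored; the second device’s behaviour is inferred from post‑deployment data, which is what commenters argue the broad definition can capture.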

Subliminal Manipulation, Advertising, and Carve‑outs

  • The legal text targets systems that intentionally deploy subliminal or purposefully manipulative techniques that materially distort behavior and cause, or are reasonably likely to cause, significant harm.
  • A recital explicitly excludes “common and legitimate commercial practices” such as standard advertising that complies with existing law.
  • Some see this as sensible, targeting malicious dark‑pattern systems rather than all personalization. Others see it as poor drafting: if ads need explicit carve‑outs, the law is either too broad or mis‑identifying the actual harms.
  • There is recurring frustration that cookie banners and adtech abuses were blamed on GDPR, even though GDPR doesn’t mandate banners and already restricts tracking; banners are viewed as dark‑pattern “malicious compliance.”

Crime Prediction, Biometrics, and Law Enforcement Exceptions

  • The article’s line “predict people committing crimes based on their appearance” is criticized as misleading: what the Act actually bans is predicting criminal offences based solely on profiling or on personality traits and characteristics, absent objective, verifiable facts and human assessment (see the sketch after this list).
  • Real‑time remote biometric identification, untargeted scraping of facial images to build facial‑recognition databases, and biometric inference of sensitive traits (e.g., sexual orientation) are on the prohibited list, with narrow exceptions (e.g., targeted searches, imminent threats) subject to authorization.
  • Some fear governments and security services will in practice be exempt or will evade scrutiny (e.g., via “national security” carve‑outs), citing past predictive‑policing systems and surveillance scandals.
  • Others note similar GDPR language has still allowed courts to strike down unlawful surveillance, and argue this at least sets enforceable boundaries.
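
A purely illustrative sketch of that “solely profiling” distinction (the dataclass, threshold, and function names are invented here; this is not a legal test):

```python
# Hypothetical sketch of the distinction discussed above.

from dataclasses import dataclass, field


@dataclass
class Case:
    profiling_score: float  # output of some predictive model
    objective_facts: list[str] = field(default_factory=list)  # documented evidence


def flag_automatically(case: Case) -> bool:
    """The prohibited pattern: a risk assessment based *solely* on a
    profiling score, with no facts and no human in the loop."""
    return case.profiling_score > 0.8


def flag_with_human_review(case: Case, reviewer_agrees: bool) -> bool:
    """The pattern commenters say remains lawful: the score is at most an
    advisory input; action requires objective facts plus a human decision."""
    if not case.objective_facts:
        return False        # no objective, verifiable basis -> no action
    return reviewer_agrees  # the final assessment rests with a person


case = Case(profiling_score=0.91)
print(flag_automatically(case))                            # True: banned pattern
print(flag_with_human_review(case, reviewer_agrees=True))  # False: no facts on record
```

The second path encodes the two elements the Act requires before such a score can matter: objective, verifiable facts and a human assessment.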

Emotion Inference and Workplace/School Uses

  • AI systems that infer emotions in workplaces and educational institutions are banned, with exceptions for medical or safety purposes.
  • Supporters of the ban point to “smile scoring” and emotional surveillance being used to evaluate workers or students.
  • Some suggest beneficial uses (e.g., tools assisting autistic people, stress‑detection wearables); the medical/safety exception is seen as covering many of these, but concerns remain about coerced or misused “wellness” monitoring.

Open Models, Providers vs Deployers, and Research Exemptions

  • The regulation targets placing a system on the market or putting it into service, not merely publishing weights or doing research.
  • Open‑weight models can technically enable banned applications, but liability attaches to whoever deploys them in a prohibited or high‑risk context.
  • Research, development, and prototyping before market release, and systems designed exclusively for defense/national security, are explicitly exempt.

Risk Tiers: “Unacceptable” vs “High Risk”

  • Many note the “unacceptable risk” list is relatively narrow: social scoring, manipulative systems, certain biometric and facial‑recognition uses, crime prediction based solely on profiling, and emotion inference at work or school.
  • The broader impact lies in the “high‑risk” category (safety‑critical, fundamental rights, access to essential services), which triggers risk‑management, documentation, and oversight rather than a ban.
  • Supporters see this as analogous to product safety regimes: stricter controls where AI decisions directly affect rights, health, or livelihood.

Innovation, Competitiveness, and Regulatory Burden

  • Critics argue that vague, hard‑to‑interpret laws backed by large fines chill investment, entrench incumbents, and push startups and military/dual‑use AI work out of the EU, repeating the perceived GDPR side effects on SMEs.
  • Supporters counter that:
    • The main targets are already illegal or heavily regulated when done by humans (e.g., discriminatory scoring, opaque denial of rights).
    • Slowing or steering harmful uses is preferable to “winning” an AI arms race.
    • GDPR and similar rules have improved privacy despite poor implementations like cookie banners.

Enforcement, Ambiguity, and Cultural Divide

  • Several participants stress that, as with GDPR, much will depend on case law; early years will be dominated by “we don’t know yet” from lawyers and regulators.
  • There’s concern about “peasant‑trap” dynamics: big firms absorb compliance costs and litigate; small projects self‑censor or shut down due to fear of multimillion‑euro penalties.
  • Discussion repeatedly contrasts the EU’s rights‑first, data‑as‑individual‑property mindset with US companies’ data‑as‑asset model, and asks whether citizens should be allowed to “pay with data” or whether that undermines fundamental rights.