Avoiding AI is hard – but our freedom to opt out must be protected
What “AI” Refers To
- Many comments argue the article never defines “AI” clearly, conflating:
  - Longstanding machine learning in search, spam filters, spellcheck, fraud detection.
  - Newer “GenAI” / LLMs used for text, images, and decision support.
- Several note that public and even technical usage of “AI” has shifted recently toward GenAI, while historically it was a marketing term or a sci‑fi trope.
Is Opting Out Even Possible?
- One camp says “opting out of AI” is essentially impossible:
  - Email spam filtering, card payments, search engines, and critical infrastructure already depend on ML.
  - Letting individuals “opt out” would break systems (e.g., spammers would just opt out of spam filters).
- Others argue there should at least be choice:
  - Pay more for non‑AI or low‑automation services, analogous to fees for in‑person banking or paper mail.
  - The main complaint is not AI’s existence, but being forced to use it with no alternative.
Human vs AI Decisions
- Some challenge the article’s framing that human decisions are inherently preferable:
  - Hiring filters and resume screeners have been automated for years; humans are biased and inconsistent too.
  - AI might approximate human judgments (including their biases) at scale.
- Others worry about:
  - Doctors or insurers relying on opaque systems patients cannot question.
  - AI in insurance or healthcare maximizing denials and leaving no realistic recourse.
Accountability, Recourse, and Regulation
- Strong concern that AI diffuses responsibility: “the machine decided” becomes a shield.
- Counter‑argument: companies are already accountable through existing mechanisms (vicarious liability, regulatory agencies).
- Suggestions:
  - Mandatory human appeals for high‑stakes decisions; AI should never be the final arbiter.
  - Transparency via test suites (e.g., probing for racial bias) rather than reading model code; a minimal sketch follows this list.
  - “Recall” faulty models across all deployments, analogous to defective physical products.
- GDPR Article 22 (the right not to be subject to solely automated decisions with legal or similarly significant effects) and recent EU/UK AI safety efforts are cited as partial frameworks, though enforcement and scale remain open questions.
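To make the test‑suite idea concrete, here is a minimal sketch of black‑box probing for disparate treatment: feed a deployed decision system paired inputs that differ only in a protected attribute and check that the outcomes match. The model, field names, and decision rule below are hypothetical stand‑ins, not anything from the article or thread.

```python
# Sketch of "transparency via test suites": probe an opaque decision system
# with counterfactual pairs instead of reading its internals. Everything
# named here (Applicant, hypothetical_credit_model) is an illustrative
# assumption.

from dataclasses import dataclass, replace


@dataclass(frozen=True)
class Applicant:
    income: int
    debt: int
    ethnicity: str  # protected attribute; the model must not use it


def hypothetical_credit_model(a: Applicant) -> bool:
    """Stand-in for an opaque deployed model (True = approve)."""
    return a.income - a.debt > 20_000


def probe_for_disparate_treatment(model, base: Applicant,
                                  groups: list[str]) -> list[str]:
    """Return the groups whose decision diverges from the base applicant's.

    Each probe clones the base applicant and changes only the protected
    attribute, so any difference in outcome is attributable to that field.
    """
    base_decision = model(base)
    return [g for g in groups
            if model(replace(base, ethnicity=g)) != base_decision]


if __name__ == "__main__":
    base = Applicant(income=55_000, debt=10_000, ethnicity="group_a")
    flagged = probe_for_disparate_treatment(
        hypothetical_credit_model, base, ["group_b", "group_c"])
    # Empty list: no treatment difference found on this one profile.
    # A real suite would sweep many base profiles and attributes.
    print("groups with divergent outcomes:", flagged)
```

A real audit would run thousands of such counterfactual pairs across varied profiles; the point of the suggestion is that this requires only query access to the model, not its code or weights.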
Data, Training, and Privacy
- Split views on training:
  - Some say “if you publish it, expect it to be read and trained on.”
  - Others insist there’s a clear difference between reading and unlicensed mass reuse, especially when monetized.
- Debate over whether large‑scale training on unlicensed works is lawful (especially under UK law) and whether it undermines incentives for human creators.
Broader Cynicism
- Some see the article as personal neurosis rather than a societal problem.
- Others generalize to a wider critique: pervasive tracking, advertising, and AI‑mediated services make “going offline” the only true opt‑out, which is increasingly incompatible with normal life.