Benn Jordan's AI poison pill and the weird world of adversarial noise
Scope of “Learning” and IP Rights
- One camp argues artists should not control how others “learn” from published works; any data use, including AI training, should be allowed once something is public.
- Others see a clear distinction between humans learning and corporations training models for profit, and view unconsented AI training as a new form of exploitation.
- Several participants stress that current law protects copying, performance, and distribution, not “learning” as such, and that analogies between human and machine learning are legally weak.
Radical Anti‑IP vs Reformist Positions
- A vocal minority advocates abolishing IP entirely: no copyrights, no control over remixing or commercial reuse, and acceptance that corporations could freely profit from all published works.
- Critics counter that this primarily benefits large platforms, further weakens already precarious creators, and would gut many knowledge‑ and R&D‑heavy industries.
- A more common middle ground favors shorter copyright terms and narrower rights, while preserving some exclusivity to incentivize creation and prevent outright plagiarism or fraud.
How Should Artists Get Paid?
- Multiple comments note that royalties and streaming payments are already negligible for most musicians; many rely on touring, merch, sponsorships, patronage, or basic income‑style support.
- Some argue YouTube‑style models (free content, monetized via sponsorship and fan support) show that creators can earn without strong IP; others respond that this favors “personality” content and doesn’t scale to all art forms.
Adversarial Noise / “Poison Pills” Against AI
- Technically minded posters are skeptical that adversarial perturbations will work long‑term as a protection strategy:
  - Attacks often don’t transfer well across models, architectures, and preprocessing pipelines.
  - Data cleaners can add noise, denoise, filter out inaudible frequency bands, or resynthesize the audio entirely, stripping many perturbations (see the sketch after this list).
  - Once a defense is broken, all previously “protected” content becomes retroactively vulnerable.
- Some see these methods as symbolic protest or a temporary cost‑imposer on model trainers; others warn that overselling weak defenses lulls artists into a false sense of security.
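To make the cleaning objection concrete, here is a minimal Python sketch. It is not Benn Jordan's actual technique: the "poison" here is just band-limited noise parked near the edge of human hearing, a stand-in for a real adversarial perturbation, and every name and parameter is hypothetical. It shows why anything hidden in the near-inaudible band is fragile: a single low-pass filter, a routine step in an audio-cleaning pipeline, strips it while leaving the audible content essentially intact.

```python
# Toy sketch, assuming numpy and scipy are installed. The "perturbation"
# is plain band-limited noise above ~18 kHz, NOT the method from the video.
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

SR = 44_100  # sample rate in Hz (hypothetical choice)

def add_hf_perturbation(audio: np.ndarray, strength: float = 0.02) -> np.ndarray:
    """Mix band-limited noise into the 18-21 kHz range, near the edge of hearing."""
    noise = np.random.default_rng(0).standard_normal(len(audio))
    sos = butter(8, [18_000, 21_000], btype="bandpass", fs=SR, output="sos")
    return audio + strength * sosfilt(sos, noise)

def clean(audio: np.ndarray, cutoff: float = 15_000) -> np.ndarray:
    """A data cleaner's trivial countermeasure: zero-phase low-pass filtering.
    Anything hiding above `cutoff`, including the perturbation, is discarded."""
    sos = butter(10, cutoff, btype="lowpass", fs=SR, output="sos")
    return sosfiltfilt(sos, audio)

def hf_energy(audio: np.ndarray) -> float:
    """Total spectral magnitude above 17 kHz, where the perturbation lives."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1 / SR)
    return float(spectrum[freqs > 17_000].sum())

# Demo on one second of a plain 440 Hz tone.
t = np.arange(SR) / SR
original = 0.5 * np.sin(2 * np.pi * 440 * t)
poisoned = add_hf_perturbation(original)
cleaned = clean(poisoned)

print(f"HF energy, original: {hf_energy(original):.1f}")  # essentially 0
print(f"HF energy, poisoned: {hf_energy(poisoned):.1f}")  # large
print(f"HF energy, cleaned:  {hf_energy(cleaned):.1f}")   # far smaller again
```

A real perturbation would be far more carefully crafted than random noise, but the structural point raised in the thread stands: a defense confined to parts of the signal listeners cannot hear occupies exactly the part of the signal a training pipeline can afford to throw away.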
Ethics, Politics, and Centralization
- There’s debate over whether opposing IP in this context aligns more with empowering creators or with the interests of large tech firms seeking cheap training data.
- Several comments frame generative AI not as “liberating creativity” but as centralizing cultural production inside expensive, proprietary models owned by a few corporations.