OpenAI fails to deliver opt-out system for photographers
Opt-out system & consent
- Many see OpenAI’s undelivered “Media Manager” opt-out tool as evidence the company doesn’t genuinely want data excluded, especially since the existing process requires photographers to submit each work individually with a detailed description.
- Commenters argue the burden is absurd at scale: creators would have to track and use separate opt-out mechanisms for many different AI firms (a rough sketch of what per-crawler opt-outs look like today follows this list).
- Several say consent should be opt‑in, not opt‑out: OpenAI should ask before using works, as anyone else wanting to use copyrighted material ordinarily must.
- Tech companies’ approach to consent is criticized as showing disregard or even contempt, with analogies to invasive or predatory behavior.
- Some note precedents like Google’s “_nomap” Wi‑Fi SSID suffix as similarly lopsided “opt-out” schemes.
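To make the scale point concrete, here is a minimal, hypothetical sketch of what a per-crawler web opt-out looks like today: each AI firm publishes its own user-agent token that site owners must discover, list, and keep current in robots.txt. The tokens shown are a small, illustrative subset, and none of this covers works already scraped or the per-image submissions OpenAI’s Media Manager was meant to replace.

```python
# Minimal sketch, not an official tool: generate robots.txt rules asking known
# AI-training crawlers not to fetch a site. The user-agent list is partial and
# illustrative; it has to be tracked and updated per vendor, which is exactly
# the maintenance burden commenters describe. It only affects future crawling
# by bots that choose to honor robots.txt.

AI_CRAWLER_TOKENS = [
    "GPTBot",           # OpenAI's web crawler
    "Google-Extended",  # Google's AI-training opt-out token
    "CCBot",            # Common Crawl
]

def robots_txt(tokens: list[str]) -> str:
    """Return robots.txt text disallowing each listed user agent site-wide."""
    return "\n\n".join(f"User-agent: {token}\nDisallow: /" for token in tokens)

if __name__ == "__main__":
    print(robots_txt(AI_CRAWLER_TOKENS))
```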
Copyright, fair use, and training data
- One side claims training on scraped content is clearly fair use: models create non-expressive abstractions, are transformative, and don’t “copy” in the copyright sense.
- Others argue it should be infringement, especially as models begin to substitute for the market of original works and occasionally regurgitate them.
- Multiple people stress the law is unsettled, with many lawsuits pending; any “it’s clearly X” position is disputed.
- There is debate over analogies to humans learning from books or art:
  - Pro-AI side: learning isn’t infringement; output is only a problem if it reproduces protected expression.
  - Critical side: scale, automation, and corporate profit make this fundamentally different.
Artists’ livelihoods, styles, and compensation
- Some argue artists should be able to exclude their work and even force retraining of models that used it without consent.
- Others note that style is generally not protected, and that artists have always learned by copying others.
- Counterpoint: machines can replicate a style in days and produce near‑infinite derivatives, creating an uneven playing field and disincentivizing innovation.
- Suggested remedies include mandatory compensation schemes for training use, akin to music royalties, and updated licenses for code and writing.
Legal / policy expectations
- Several expect courts or legislatures to eventually clamp down, especially under pressure from large rights‑holders (e.g., media companies).
- Others think powerful AI firms will win favorable rules (e.g., training classified as fair use), especially if framed as essential for innovation or AGI.
Double standards & platform behavior
- Commenters highlight a perceived two‑tier system: everything online is fair game for training, but model weights and AI outputs are aggressively protected.
- Policies forbidding training on AI outputs are seen as hypocritical when those models were trained on uncredited human work.
OpenAI, AGI, and trust
- Strong distrust toward OpenAI is common: accusations of broken promises, bait‑and‑switch from “open” non‑profit roots, and prioritizing profit over creators.
- Some frame the work as so important (potential AGI, “benefit of humanity”) that copyright concerns are treated as secondary.
- Several express skepticism that current LLMs can reach AGI, noting hallucinations, lack of true understanding, and mostly incremental scaling rather than paradigm shifts.