AI Ethics is being narrowed on purpose, like privacy was
Data scraping, IP, and creator rights
- Strong disagreement over whether mass web-scraping for training is ethical or “just fair use.”
- Some argue legal fights will only create licensing middlemen (data brokers); even if fees are paid, the jobs still vanish.
- Others say being able to exclude one’s work from training preserves a unique style and livelihood.
- Counterargument: individual “styles” aren’t legally protected, are easy to imitate, and trying to regulate style would create absurd lawsuits and favor big corporations.
- Several comments highlight how current practices disproportionately “rip the small fish to feed the big fish,” and question why licenses that explicitly forbid derivative works are being ignored.
- A minority call for abolishing IP entirely; others say that’s unrealistic and would never be accepted by large rights-holders.
Ethics vs safety and alignment framing
- Discussion distinguishes “ethics” (power, racism, labor, governance) from “safety” (preventing model-caused harm, often in speculative AGI scenarios).
- Some see “AI safety” as a corporate rebranding that sidelines work on real-world harms (bias, surveillance, discrimination) in favor of sci‑fi “God AI” narratives that attract funding and deflect regulation.
- Others defend focus on novel risks from powerful models and reject claims that existing human-centered frameworks suffice.
Asimov’s laws, alignment, and constraints
- Long debate on Asimov’s Three Laws: many insist they were deliberately constructed as flawed plot devices, not a serious ethics framework.
- Others say even as fiction they usefully highlight that multiple stakeholders exist (creators, owners, society) and that hard constraints might still be preferable to today’s vague “alignment.”
- Several point out that modern LLMs are trained via data, not hand-coded rules; they behave more like partially socialized children than logical robots, making simple rule-sets inapplicable and hard to enforce.
Role and legitimacy of “ethicists”
- Some view many AI ethics/safety people as non-technical PR actors obsessed with “governance structures,” not practical mitigations or benchmarks.
- Others counter that there is a substantial technically competent safety community and that dismissing all ethicists ignores serious work on neural circuits, bias, and system-level risks.
Whose ethics and value pluralism
- Commenters repeatedly ask “whose ethics?” and worry about hidden ideological or religious agendas embedded in guardrails.
- One camp favors heavily constrained, non-agentic tools; another prefers uncensored local models and rejects corporate “moralizing” overrides.
Concrete harms vs speculative futures
- Many emphasize present-day issues: deepfakes, voice cloning, content moderation bias, job loss, data extraction, and even AI scrapers DDoS‑ing small sites.
- Others argue that fixating exclusively on near-term harms, or exclusively on sci‑fi scenarios, misallocates attention either way; systemic incentives under capitalism are seen as driving all forms of misuse.