Gentoo AI Policy

Context and timing

  • The policy dates from April 2024; some argue it predates a “step change” in coding agents (Claude Code, o1/o3, newer GPT/Claude models) and will soon look outdated.
  • Others push back that “AI for coding just improved again” is claimed every month, and that step changes don’t automatically invalidate a cautious stance.

Ethical, copyright, and environmental concerns

  • Gentoo cites copyright-violating training data, high energy/water use, labor impacts, and spam/scam enablement.
  • Several commenters say these concerns are overgeneralized or selectively applied: email, video streaming, flights, and automation software also have large footprints or potential for harm.
  • There is debate over whether training on copyrighted data is fair use; some point to recent US rulings and settlements, but note that law outside the US and the methods used to acquire training data (e.g. torrents) remain contentious.
  • Some see the policy as ideologically motivated; others respond that FOSS itself is ideological and ethics-based reasoning is legitimate.

Code quality, review burden, and project health

  • Gentoo’s quality concern resonates strongly: LLMs produce plausible-looking but wrong code, increasing reviewer workload and risking subtle bugs.
  • An example from LLVM: a large AI-assisted PR that drew >100 review comments is described as both an excellent learning experience for its author and a significant burden on reviewers.
  • Maintainers worry about being flooded with “AI slop” PRs from inexperienced contributors or resume-builders, effectively a soft DDoS; curl’s experience with AI-generated bug reports is cited as a cautionary example.
  • Some argue LLMs surface preexisting governance weaknesses (poor controls on large, low-quality submissions) rather than create new ones.

Policy scope, consistency, and enforcement

  • Critics call the policy poorly scoped: “AI” is left undefined (does it cover autocomplete, translation, small models?), and many of the stated harms apply equally to non-AI tools.
  • Others reply that in a volunteer project you can simply reject contributors who rule‑lawyer the edge cases; the policy mainly empowers maintainers to close low-effort LLM PRs.
  • Enforcement is acknowledged as mostly honor-system: well-reviewed AI-assisted code is indistinguishable from human-written code, so in practice the policy targets obvious, low-effort use.
  • Some fear a chilling effect on legitimate contributors or see the stance as anti‑innovation; others see it as prudent risk management for a critical, long‑lived distro.