Debian decides not to decide on AI-generated contributions
Overall reaction to Debian’s “no decision”
- Many see “not deciding” as reasonable given fast-changing tech and unclear impacts.
- Others think strong anti-LLM policies are overdue, especially for critical infrastructure like distros, kernels, and compilers.
- Several argue that focusing on “AI or not” is a distraction; projects should focus on whether contributions are good, safe, and maintainable.
Licensing, copyright, and ethics
- One camp views LLMs as trained on uncompensated human work, making their outputs ethically tainted and potentially license-violating, especially for GPL/copyleft.
- Others say all creative work is derivative, IP regimes are already broken, and public‑domain‑like AI output is legally usable once properly reviewed.
- There is disagreement over whether AI-generated code can be copyrighted or licensed; some point to US Copyright Office guidance that purely AI-generated output is not copyrightable, which complicates FOSS and proprietary projects alike.
- Some fear future legal or financial obligations to rightsholders whose data trained closed models.
Code quality, review burden, and spam
- Strong consensus that low-effort AI “slop” is a real problem: large, shallow PRs, hallucinated APIs, and unreadable abstractions.
- Maintainers report being flooded with low-value PRs, similar to “Hacktoberfest on steroids.”
- Critics note that LLM output often looks superficially good, which raises review costs compared with obviously bad human code.
- Pro-AI commenters counter that modern models can produce high-quality, often working code when driven by skilled developers, and that bad code predates AI.
Trust, responsibility, and reputation
- Widely shared view: responsibility sits with the submitter. They must understand, defend, and maintain what they contribute, AI-assisted or not.
- Several propose stronger reputation/onboarding systems: small patches first, “DKP-like” points, limits for new contributors, or blocking large PRs from unknowns.
- Some argue “no AI” rules mostly punish honest, high-quality contributors, while bad actors will lie or churn new accounts.
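The tiered reputation/onboarding ideas above can be sketched as a simple gate. Everything here is hypothetical illustration, not anything Debian or the commenters actually specified: the `Contributor` shape, the point thresholds, and the size limits are all made-up parameters.

```python
# Hypothetical sketch of a reputation-gated contribution policy:
# new contributors are limited to small patches, and the limit
# relaxes as merged work accrues "points" (all values illustrative).

from dataclasses import dataclass

@dataclass
class Contributor:
    name: str
    points: int = 0  # earned per merged, accepted patch

# (minimum points, maximum changed lines allowed) per trust tier.
TIERS = [
    (0, 50),     # unknown contributor: tiny patches only
    (5, 300),    # a few merged patches: medium changes
    (20, 5000),  # established: effectively unrestricted
]

def max_patch_size(c: Contributor) -> int:
    """Highest size limit whose point threshold the contributor meets."""
    allowed = 0
    for threshold, limit in TIERS:
        if c.points >= threshold:
            allowed = limit
    return allowed

def may_submit(c: Contributor, changed_lines: int) -> bool:
    return changed_lines <= max_patch_size(c)

newcomer = Contributor("alice")
assert may_submit(newcomer, 40)       # small fix: enters review
assert not may_submit(newcomer, 400)  # large PR from an unknown: bounced

newcomer.points = 6                   # after several merged patches
assert may_submit(newcomer, 250)      # medium changes now allowed
```

The same tier table could feed other policies from the thread, such as blocking large PRs from unknown accounts outright rather than silently capping them.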
Detection and enforcement
- Many see labeling or banning AI-generated code as unenforceable without intrusive surveillance or unreliable detectors.
- Others say rules still matter for establishing intent: violating a “disclose AI use” policy becomes unambiguous bad faith once it is detected.
AI as tool vs. replacement; human value
- One side: AI is just another tool (like autocomplete, linters). What matters is human understanding and intent.
- Opposing view: viewing AI as a human replacement undermines human dignity and labor value; some tie this to broader capitalist exploitation.
- Some push back on AI “inevitability” narratives, seeing them as hype to drive adoption and layoffs.
Accessibility and positive use cases
- Multiple commenters with RSI or disabilities describe LLMs and speech+AI workflows as transformative, restoring or enhancing their ability to code and write.
- Others accept these as compelling edge cases but maintain that mass low-effort AI use still harms maintainers and code quality.
Process and tooling proposals
- Ideas include:
  - AI-assisted code review as a first filter, with trust scoring and automatic feedback/triage.
  - Limiting PR size or complexity for new contributors.
  - Requiring discussion/spec design before non-trivial PRs.
  - Explicit policies: contributors must be able to explain changes; “one-strike” ejection for unexplainable slop.
- Cost and adversarial behavior are noted as major obstacles to AI-based review at scale.
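A rough illustration of the first-filter idea above, combining trust signals, PR size limits, and the require-discussion-first rule into one triage step. All field names and thresholds are hypothetical, not an actual Debian tool or policy:

```python
# Illustrative first-pass triage for incoming PRs: bounce oversized
# changes from unknown contributors automatically, and route
# non-trivial changes without a linked design discussion back to the
# submitter, so human reviewers only see the remainder.
# (Thresholds and fields are made up for the sketch.)

from dataclasses import dataclass

@dataclass
class PullRequest:
    author_merged_prs: int       # crude trust signal
    changed_lines: int
    has_design_discussion: bool  # links a prior spec/issue thread

def triage(pr: PullRequest) -> str:
    """Return a queue label: 'auto-reject', 'needs-discussion', or 'human-review'."""
    new_contributor = pr.author_merged_prs < 3
    if new_contributor and pr.changed_lines > 500:
        # Large PRs from unknowns get automatic feedback, not a reviewer.
        return "auto-reject"
    if pr.changed_lines > 200 and not pr.has_design_discussion:
        # Non-trivial changes must reference a prior design discussion.
        return "needs-discussion"
    return "human-review"

assert triage(PullRequest(0, 800, False)) == "auto-reject"
assert triage(PullRequest(10, 300, False)) == "needs-discussion"
assert triage(PullRequest(10, 300, True)) == "human-review"
```

As the thread notes, a filter like this is the cheap part; running an LLM reviewer behind it at scale raises the cost and adversarial-gaming problems the commenters flag.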