Define policy forbidding use of AI code generators
Scope and Strictness of the Policy
- QEMU’s new rule explicitly bans any code “written by, or derived from” AI code generators, not just obvious bulk generations.
- Several commenters note this is stricter than LLVM’s stance and rules out even the “I had Claude draft it, but I fully understand it” workflow.
- Some read the policy as leaving room for using AI for ideas, API reminders, or docs, so long as the contributed code itself is human‑written; others stress the text does not actually say that.
Primary Motivations: Legal Risk vs. Slop Avoidance
- Maintainers cite unsettled law around copyright, training on mixed‑license corpora, and GPLv2 compatibility; the risk of having to rip out AI‑derived code if it is later found infringing is seen as enormous.
- Others suspect the deeper motive is practical: projects are already being hit by low‑quality AI PRs and AI‑written bug reports, which are costly to triage and reject.
- Analogies are made to policies against submitting unlicensed or reverse‑engineered proprietary code: hard to enforce perfectly but necessary as a norm and liability shield.
Quality, Review Burden, and “Cognitive DDoS”
- Many maintainers report AI‑generated patches and code reviews that “look competent” but are subtly wrong, requiring far more reviewer time than the author spent.
- Anecdotes: LLMs confidently “fixing” non‑bugs, masking root causes (see the sketch after this list), hallucinating APIs, and generating insecure code unless explicitly steered.
- Concern that managers and mediocre developers wield LLM output as an authority against domain experts, creating a “bullshit asymmetry” (refuting the output costs far more effort than generating it did) and damaging morale.
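
A hypothetical illustration (not taken from the thread) of the “masking root causes” failure mode, as a minimal Python sketch: the real bug lives in a caller, but the generated “fix” silences the symptom, producing exactly the kind of plausible‑looking patch that burns reviewer time.

```python
def read_config(path: str) -> str:
    """Return the contents of a config file."""
    # Actual bug: one caller passes path=None, raising TypeError.
    # The proper fix is in that caller. The kind of LLM patch
    # reviewers complain about wraps the symptom instead:
    try:
        with open(path) as f:
            return f.read()
    except Exception:
        # Swallows the TypeError from path=None -- and every
        # genuine I/O error along with it. The diff looks tidy
        # and may even pass CI, but the root cause is now invisible.
        return ""
```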
Open Source Ecosystem and Licensing Implications
- Discussion that OSS is especially exposed if AI output is later judged either infringing (forcing mass rewrites) or public domain (weakening copyleft leverage).
- Some argue copyleft itself relies on copyright and that mass unlicensed scraping undermines the social contract that motivated many FOSS contributors.
- Others counter that future AI‑driven projects will outpace “human‑only” ones, and that strict bans may lead to forks or competing projects that embrace AI.
Tooling Nuance and Enforceability
- Distinction drawn between full codegen, agentic refactors, autocomplete‑style hints, and AI for tests/CI/docs; experiences are mixed on where each is genuinely helpful.
- Many note the policy is practically unenforceable at fine granularity; its main effect is to set expectations, deter blatant AI slop, and shift legal responsibility onto contributors via the DCO sign‑off (see the sketch after this list).
- QEMU’s “start strict and safe, then relax” approach is widely seen as conservative but reasonable for a critical low‑level project.
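
For context on the DCO point: the Developer Certificate of Origin is enforced through the `Signed-off-by:` trailer that `git commit -s` appends, by which the contributor personally certifies they have the right to submit the code under the project’s license. Below is a minimal sketch of a CI‑style provenance check (an assumed illustration, not QEMU’s actual tooling) that flags commits missing that trailer; the revision range `origin/master..HEAD` is just an example default.

```python
import subprocess
import sys

def commits_missing_signoff(rev_range: str) -> list[str]:
    """Return hashes of commits in rev_range lacking a Signed-off-by trailer."""
    shas = subprocess.run(
        ["git", "rev-list", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    missing = []
    for sha in shas:
        # %B prints the full commit message for inspection
        body = subprocess.run(
            ["git", "show", "-s", "--format=%B", sha],
            capture_output=True, text=True, check=True,
        ).stdout
        if not any(line.startswith("Signed-off-by:") for line in body.splitlines()):
            missing.append(sha)
    return missing

if __name__ == "__main__":
    rev_range = sys.argv[1] if len(sys.argv) > 1 else "origin/master..HEAD"
    bad = commits_missing_signoff(rev_range)
    for sha in bad:
        print(f"{sha} lacks a Signed-off-by line", file=sys.stderr)
    sys.exit(1 if bad else 0)
```

A check like this can verify that the trailer exists, but not whether the certification is truthful; that gap is why commenters see the policy as a norm‑setting and liability‑shifting device rather than a technical gate.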