Chezmoi introduces ban on LLM-generated contributions
Policy change and scope
- The thread clarifies that the current policy is a blanket ban: any contribution containing LLM‑generated content leads to an immediate ban, with no recourse.
- Earlier, more permissive language about “unreviewed” LLM content was removed; several commenters initially misread the diff, confusing the old and new wording.
- Some interpret “any LLM use” narrowly (only generated content), others more broadly (even using Copilot/tab‑complete or LLMs for review could technically violate it).
Enforcement and ambiguity
- Many doubt the policy is enforceable: it’s impossible to prove that no LLM was used, and AI detectors are unreliable.
- Others say enforcement will be social: if maintainers think something “looks like” LLM output, they’ll reject it and ban the contributor.
- Concerns are raised about false positives: humans who simply wrote bad or unusual‑looking code could face a no‑recourse ban.
Maintainer motivations and experience
- Commenters assume the maintainer is reacting to floods of low‑effort, incorrect “slop” PRs and even bogus vulnerability reports obviously produced by LLMs.
- The linked discussion shows frustration: a past policy of “LLM use allowed if carefully reviewed and declared” was ignored by contributors, leading to the hard ban.
Community impact and fairness
- Some see the “immediately banned without recourse” language as hostile and off‑putting; they say they wouldn’t contribute under such a policy.
- Others argue that’s the point: the project prefers fewer contributors over spending time triaging AI‑generated junk.
- One view: the rule is mainly a cudgel for quickly ejecting net‑negative contributors, not a literal witch hunt against good PRs that happened to use Copilot.
Alternative approaches suggested
- Proposals include:
  - Banning only “unreviewed” or “low‑quality” LLM contributions.
  - Requiring disclosure of LLM use and prompts.
  - Providing project‑specific LLM contribution guidelines.
- Supporters of the ban counter that debating “quality” is more contentious and time‑consuming than a bright‑line no‑LLM rule.
Legal and copyright considerations
- Several comments raise unresolved questions about whether AI‑generated code is copyrightable and whether it risks “public domain contamination” of projects.
- Others summarize recent US copyright guidance: pure AI output isn’t protected, while human‑modified output might be, depending on the degree of human authorship.
- A few speculate that a clear no‑LLM policy might be a defensive move against future legal uncertainty.