LLM policy?
Maintainers’ Experiences with LLM “Slop”
- Several maintainers report a noticeable rise in LLM‑generated issues and PRs: verbose, confident, often wrong, and time‑consuming to verify.
- Examples include fabricated or exaggerated bug reports that “gaslight” maintainers into paranoid re‑audits of correct code, giant refactors dropped on dormant projects, and spammy auto‑generated internal bug reports driven by corporate “use AI” mandates.
- Others say they haven’t seen obvious LLM content yet, suggesting the worst of it targets high‑profile, buzzwordy projects.
- There are also politically motivated edits masked as neutral technical changes (e.g., country naming), sometimes caught by AI code review tools.
Proposed Project Policies and Triage Tactics
- Suggestions range from blocking users at the GitHub level to adopting “hard‑no” policies: close suspected‑AI issues without investigation and require strong proof to reopen.
- A common theme: raise the bar for all contributions. One maintainer probes unfamiliar contributors with follow‑up questions; if they can’t discuss the code intelligently, the PR/issue is deprioritized or dropped.
- A “middle ground” proposal: require explicit disclosure of AI use in issue/PR templates plus a description of how the change was validated; dishonesty about this could be grounds for sanctions (see the template sketch after this list).
- Others advocate AGENTS.md‑style guidance for bots, but some maintainers resist writing extra docs for tools they don’t use.
- Some projects simply ban AI‑generated contributions; others take a permissive stance. There’s concern that nuanced policies create enforcement overhead and adversarial dynamics.
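As a concrete sketch of the disclosure‑plus‑validation idea above (assuming a GitHub‑hosted project; the file path, wording, and checkbox items are illustrative, not taken from the thread), a pull request template could surface both questions up front:

```
<!-- Hypothetical .github/PULL_REQUEST_TEMPLATE.md -->

## AI disclosure
- [ ] No AI/LLM tools were used for this change
- [ ] AI/LLM tools were used (list them and describe what they generated)

## Validation
<!-- Describe how you verified the change: test suites run, manual steps, benchmarks -->

- [ ] I have read and understood every line of this change and can answer questions about it
```

The same checklist can be mirrored in issue templates; the point is to make honesty cheap and silence conspicuous, not to detect AI use automatically.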
Trust, Community Culture, and Social Effects
- Many worry LLM abuse will turn open source from a high‑trust into a low‑trust environment, similar to “Eternal September.”
- There’s concern that fear of AI accusations will push students/devs to deliberately write worse code or prose to “look human.”
- Broader discussion covers misinformation volume, declining trust in evidence (photos, video), and whether people are actually becoming less gullible or just shifting which scams they fall for.
Debate on Utility vs Harm of LLM-Assisted Coding
- Some maintainers say they don’t care how code was written if the contributor understands it, tested it, and is honest about AI use; bad code is disrespectful regardless of provenance.
- Others find LLM‑generated code disproportionately prone to subtle errors and harder to review, and see mentorship as wasted effort when the human is merely proxying prompts.
- A few individuals say LLMs finally let them ship working systems despite long‑standing difficulty with “bottom‑up” coding; critics respond that current models still often fail basic quality bars.
Legal and Platform Concerns
- Multiple commenters flag copyright and DCO issues: it’s unclear who owns LLM output or whether it’s tainted by its training data. Some maintainers treat accepting AI code as a legal risk, especially for closed‑source work (see the sign‑off example after this list).
- GitHub’s strong Copilot integration is seen as amplifying the problem; some predict a shift to alternative forges with stricter AI policies and moderation.
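On the DCO point specifically: many projects require a Signed‑off‑by trailer on each commit, which certifies under the Developer Certificate of Origin that the contributor has the right to submit the work, and that certification is exactly what is murky for LLM output. As a reminder of the mechanism (name and email are placeholders):

```
$ git commit -s -m "Fix off-by-one in ring buffer indexing"
# -s / --signoff appends the DCO trailer to the commit message:
#   Signed-off-by: Jane Doe <jane@example.com>
```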