LLVM AI tool policy: human in the loop
Overall reception of the LLVM AI policy
- Majority view: policy is “obvious common sense” and necessary, especially for critical infrastructure like LLVM.
- Core idea praised: tools are fine, but the named contributor must understand, defend, and stand behind the code.
- Some are dismayed that such a basic norm even has to be written down.
Responsibility and “AI did it”
- Many report colleagues saying “Cursor/Copilot/LLM wrote that” and being unable to explain their own code.
- Strong consensus: if it’s in your PR, it’s your code; “the AI did it” is not an excuse.
- Analogy: you can’t serve a burnt sandwich and blame the toaster; your responsibility is deciding what you ship.
- One nuance: if a company mandates AI use and cuts the time allotted for verification, some argue “the AI did it” legitimately shifts blame upward to management; others compare this to “just following orders” and reject it.
Reviewer burden and “AI slop”
- Widespread fatigue with reviewing low-quality, AI-generated changes from people who don’t understand them (“slop”, “vibe coding”).
- Commenters see this as turbocharging the Dunning–Kruger effect: non-coders (and some coders) gain overconfidence and skip real understanding.
- OSS maintainers especially feel abused by drive-by, extractive contributions that cost them far more to review than they cost to generate.
Automated AI review tools
- LLVM bans autonomous AI review comments; some find this curious, citing genuinely useful internal AI reviewers.
- Defenders of the ban emphasize:
  - LLMs are “plausibility engines” and cannot be the final arbiter.
  - Human-reviewed, opt-in AI assistance is fine; autonomous agents in project spaces are not.
  - Human review spreads knowledge and fosters discussion; bots can undermine that.
Open source vs corporate context
- Companies can discipline or fire repeat offenders; OSS projects have little leverage, so they need explicit policies to prevent repeated low-quality AI submissions.
- Mailing-list workflows (e.g., GCC, the Linux kernel) are cited as natural gatekeepers: submitters must justify changes in writing rather than just opening PRs.
Copyright and legal concerns
- LLVM’s copyright clause resonates: contributors are responsible for ensuring LLM output doesn’t violate copyright, though several note that verifying this is difficult in practice.
- Debate over whether short, “irreducible” algorithmic snippets can really be infringing; some insist that if you didn’t write it, you can’t be sure.
Meta and culture
- Several dislike the original HN title as hostile and misrepresenting the policy’s tone.
- Concern about “AI witch hunts” against suspected LLM-written comments; calls to leave enforcement to moderators.
- Some find “AI slop” an overused, dismissive label that can ignore context and genuine advances.