A standard protocol for handling and discarding low-effort, AI-generated pull requests

Overall reaction to the “protocol”

  • Many find the spec hilarious, cathartic, and appropriate for dealing with “AI slop.”
  • Others think it becomes too snarky and hostile, missing an opportunity for a serious reusable template.
  • Some feel it straw-mans the issue with overly specific examples.

Nature and impact of AI-generated PRs

  • Maintainers report a rising wave of plausible-looking but useless or incorrect PRs generated by LLMs.
  • These often:
    • Don’t actually fix bugs or add real value.
    • Hallucinate libraries or features.
    • Include bloated essays for trivial changes.
  • The biggest pain point is cost asymmetry: a 30-second AI-generated PR can impose 30+ minutes of review effort.

Ethics, effort, and responsibility

  • Strong view: using AI isn’t the problem; outsourcing understanding is.
  • Suggested norm: if you can’t explain what your change does and how it fits the system (without AI), don’t submit it.
  • Several argue for “gatekeeping by effort”: reviewers should prioritize contributors who clearly invested real thinking.
  • Some advocate patience with well-meaning newcomers who don’t realize the harm; others report that offenders rarely show remorse or visible improvement.

Proposed mitigation strategies

  • Hardline options:
    • Close with a stock response; block repeat offenders.
    • Disable public PRs entirely or restrict to collaborators.
  • Process/policy options:
    • Explicit AI policies (e.g., “must be able to explain your change”).
    • Treat vague, low-effort PR descriptions as an auto-close signal.
    • Ask AI users to file issues with prompts or high-level plans rather than code diffs.
  • Economic/proof ideas:
    • Proof-of-work or “bonds” (e.g., refundable deposits, fines for AI slop) were floated but seen as hard to design and potentially wasteful.
    • Suggestions for GPG-signed commits and verifiable CI artifacts; others push back on added complexity and energy cost.
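The proof-of-work idea floated above is usually imagined as a hashcash-style scheme: the submitter burns CPU time to solve a puzzle, while the maintainer verifies it with a single hash, restoring some of the cost symmetry. A minimal sketch, assuming a hypothetical per-PR challenge string and a tunable difficulty (both invented here for illustration, not part of any proposal in the thread):

```python
import hashlib
import itertools

DIFFICULTY = 16  # required leading zero bits; hypothetical tuning knob


def solve(challenge: str) -> int:
    """Submitter's side: search for a nonce so that
    sha256(challenge + nonce) falls below the difficulty target.
    Expected cost is ~2**DIFFICULTY hashes."""
    target = 1 << (256 - DIFFICULTY)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce


def verify(challenge: str, nonce: int) -> bool:
    """Maintainer's side: a single hash, so review-queue cost stays negligible."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - DIFFICULTY))
```

The asymmetry the thread worries about shows up directly in the design: `solve` is expensive by construction, `verify` is one hash. The objections also show up: the work is pure waste heat, and difficulty that deters slop may also deter legitimate drive-by fixes.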

Alternatives and experiments

  • Some propose limiting agents to writing tests/specs, with humans doing implementation; critics note AI can also generate bad tests and that trust just moves to a different layer.
  • A few argue that open PRs/issues from the public may be unsustainable in an “infinite slop” world, predicting a shift toward forks, restricted contribution channels, or trust/rate-limiting models.
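The rate-limiting models mentioned above are typically token buckets keyed per contributor: each account accrues submission credit slowly, trusted contributors get a larger burst capacity, and unknown accounts exhaust theirs quickly. A minimal sketch (the rates and the per-contributor keying are illustrative assumptions, not something the thread specifies):

```python
import time


class TokenBucket:
    """Per-contributor submission budget: tokens refill at a steady rate
    up to a burst capacity; each accepted PR spends one token."""

    def __init__(self, rate_per_hour: float, capacity: float):
        self.rate = rate_per_hour / 3600.0  # tokens per second
        self.capacity = capacity
        self.tokens = capacity  # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False


# Hypothetical policy: unknown accounts get 1 PR/hour with a burst of 2;
# a trust model would raise both numbers for established contributors.
buckets: dict[str, TokenBucket] = {}


def submission_allowed(user: str) -> bool:
    bucket = buckets.setdefault(user, TokenBucket(rate_per_hour=1, capacity=2))
    return bucket.allow()
```

This moves the queue from "infinite slop" to a bounded review budget, at the cost the thread predicts: open contribution stops being fully open, and the trust assignment becomes the new attack surface.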