Open source maintainers are drowning in junk bug reports written by AI

Perceived Causes of AI-Generated Junk Reports

  • Many believe it’s mostly students or job-seekers padding GitHub activity and resumes, chasing “contributor” lines, badges, and bug bounty payouts.
  • Others attribute it to “morons with LLMs”: low-effort users amplified by tools that produce plausible-sounding nonsense.
  • Some speculate about more malicious angles: state or organized actors running supply-chain-style attacks, or people gaming bug bounty platforms.
  • A minority suggest it could also be AI research or tool-tuning gone wrong, though this remains speculative.

Impact on Open Source and Maintainers

  • Maintainers report a significant triage burden: verbose AI-written reports, giant “lint everything” PRs, and floods of auto-generated static-analysis issues.
  • Cost asymmetry: seconds to generate, hours or days to review and verify, plus the security risk of auditing huge diffs.
  • Some foresee maintainers becoming less responsive, especially to anonymous or low-reputation accounts.
  • Parallel problems are reported outside OSS: courts and legal teams deluged with AI-generated filings, framed as a kind of “denial of justice”.

Continuities and What’s New

  • Participants note similar pre-LLM patterns: Markov-like spam posts, misuse of static analyzers, low-signal security reports.
  • What is new is the scale, the accessibility, and the increased difficulty of spotting AI content, which often mimics polished, corporate-style language.

Proposed Mitigations

  • Reputation gates: account age requirements, “programmer whuffie” ideas, and publicizing spammy accounts to hurt hiring prospects.
  • Technical measures: CAPTCHAs/3FA for issue creation, AI honeypots (obvious fake bugs to detect scanbots), explicit style guides to prevent trivial PR ping-pong.
  • Some already rely on GitHub support to quickly remove bot accounts.
  • Others suggest fighting AI with AI: LLMs for triage and auto-responses, though critics warn this could make issue trackers “entirely useless” and fuel an AI arms race.
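One way the reputation-gate idea could look in practice is a triage bot that checks the reporting account’s age before letting an issue through. A minimal sketch of the decision logic, assuming a hypothetical 90-day threshold (the `created_at` timestamp format matches what GitHub’s REST API returns for a user, but the gating policy itself is invented for illustration):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical threshold: hold issues from accounts younger than this.
MIN_ACCOUNT_AGE = timedelta(days=90)


def should_gate(created_at_iso, now=None):
    """Return True if an issue should be held for manual review because
    the author's account is younger than MIN_ACCOUNT_AGE.

    created_at_iso: the account creation timestamp in the ISO 8601 form
    GitHub's REST API uses, e.g. "2024-01-15T09:30:00Z".
    """
    now = now or datetime.now(timezone.utc)
    created = datetime.fromisoformat(created_at_iso.replace("Z", "+00:00"))
    return now - created < MIN_ACCOUNT_AGE


# Example: a month-old account is gated, a five-year-old account is not.
check_time = datetime(2025, 2, 1, tzinfo=timezone.utc)
print(should_gate("2025-01-01T00:00:00Z", check_time))  # True
print(should_gate("2020-01-01T00:00:00Z", check_time))  # False
```

A real bot would fetch `created_at` from GitHub’s `GET /users/{username}` endpoint and post a polite “held for review” note rather than auto-closing, since hard gates also penalize legitimate first-time reporters.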

Debate on AI Trajectory and Broader Risks

  • One side expects AI tools to become far more capable and thus more dangerous as “weapons” for spam and manipulation.
  • Skeptics argue there’s been no fundamental breakthrough beyond scale; they see LLM hype as unsustainable and output quality unlikely to improve dramatically.
  • Several worry about systemic effects: escalating energy use, everyone defending against everyone else’s AI, and human support channels becoming increasingly inaccessible.