We will ban you and ridicule you in public if you waste our time on crap reports
Bug bounties, AI slop, and perverse incentives
- cURL is ending its bug bounty after being flooded with AI-generated “vulnerability” reports that are wrong, untestable, or obviously hallucinated.
- Commenters frame bug bounties as having become a lottery: throw enough AI-generated junk at high‑profile projects and eventually get paid.
- Some argue that even AI-found issues would be fine “on merit,” but maintainers stress opportunity cost: each bogus report is expensive to triage.
- Others note this isn’t new: Hacktoberfest, student contests, and bounty platforms have long produced low-effort issues and PRs; LLMs just scaled it up.
Deterrents: fees, shaming, and structural friction
- Suggested countermeasures:
- Upfront fees or refundable deposits for security reports or PRs.
- Reputation/“credit scores” for bug reporters or GitHub accounts (a toy sketch of this idea follows this list).
- Limiting who can open issues/PRs (e.g., only maintainers or prior contributors).
- Very explicit policies banning AI-generated reports.
- Pushback:
- Fees would deter casual but valuable reports and would be easy for dishonest maintainers to abuse.
- Reputation systems concentrate power in platforms (e.g., GitHub/Microsoft) and risk opaque “social scoring.”
- Shaming and public ridicule may not work on throwaway accounts and can chill good-faith reporters.
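To make the “reputation score” idea concrete, here is a toy Python sketch of how a tracker might bucket incoming reports by reporter history. Everything in it is invented for illustration: the ReporterHistory fields, weights, and cutoffs are assumptions, not any existing platform’s scoring.

```python
from dataclasses import dataclass


@dataclass
class ReporterHistory:
    """Signals a tracker might already hold about a reporter (all hypothetical)."""
    account_age_days: int
    merged_prs: int        # prior contributions accepted on the platform
    valid_reports: int     # past security reports confirmed as real
    invalid_reports: int   # past reports closed as bogus or unreproducible


def triage_bucket(history: ReporterHistory) -> str:
    """Return a coarse triage bucket for a new report from this reporter.

    Weights and cutoffs are arbitrary illustrations, not recommended values.
    """
    score = 0.0
    score += min(history.account_age_days / 365, 3.0)   # cap the benefit of account age
    score += 2.0 * history.merged_prs ** 0.5            # diminishing returns on contributions
    score += 5.0 * history.valid_reports                # confirmed findings count for a lot
    score -= 4.0 * history.invalid_reports              # bogus reports are expensive to triage

    if score >= 10:
        return "fast-track"      # review promptly
    if score >= 2:
        return "normal"          # standard queue
    return "needs-vetting"       # e.g., require more evidence or a deposit first


if __name__ == "__main__":
    newcomer = ReporterHistory(account_age_days=3, merged_prs=0,
                               valid_reports=0, invalid_reports=2)
    regular = ReporterHistory(account_age_days=900, merged_prs=12,
                              valid_reports=1, invalid_reports=0)
    print(triage_bucket(newcomer))  # needs-vetting
    print(triage_bucket(regular))   # fast-track
```

Any real version of this would have to live with whichever platform holds the history, which is exactly the centralization and opaque “social scoring” concern raised in the pushback above.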
Impact on contributors and open-source culture
- Several maintainers describe being overwhelmed by “LLM slop”: bogus issues, trivial doc PRs, or random behavior changes justified with AI text.
- Some casual contributors now hesitate to open legitimate PRs, fearing they’ll be treated as spam.
- A recurring theme: today’s “open source” norm (public issues, free support, prompt fixes) is distinct from merely publishing source, and is increasingly unsustainable without funding.
- Tension noted between cURL’s Code of Conduct (respect for contributors) and the new “we will ridicule you” stance; some see this as necessary boundary-setting, others as toxic.
Regional and incentive-driven spam
- Multiple maintainers report waves of low-quality, often LLM-written contributions from students, especially in India, tied to:
- Resume-padding (“open source contributor,” “N CVEs”).
- Programs like GSoC, hackathons, and local bootcamps that reward any PR.
- Others caution against overgeneralizing culturally; they highlight structural factors: extreme job competition, poor education quality, and incentives to “fake it” to get noticed.
- There’s disagreement whether this is mainly cultural (“saving face,” never saying “I don’t know”) or primarily about economic and incentive pressures.
Platform and tooling responses (GitHub, maintainers)
- A GitHub product manager acknowledges the problem and mentions:
- Potential options to disable PRs or limit them to collaborators.
- Better visibility into maintainers’ contribution expectations and patterns.
- AI-based triage agents for issues/PRs, though some dislike “fighting AI with AI.”
- Maintainers suggest:
- Starting everything as “discussions,” with only maintainers opening issues.
- Stronger filters on new accounts and abnormal PR patterns (see the sketch after this list).
- Clearer contribution guidelines and “maintainer profiles” to set expectations.
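As a minimal sketch of the “stronger filters on new accounts” suggestion, the script below labels open issues whose authors are not prior contributors and whose accounts are younger than a threshold. It only uses public GitHub REST endpoints (users, contributors, issue labels); the repository name, age threshold, and label are placeholders, and a GITHUB_TOKEN environment variable is assumed.

```python
import os
from datetime import datetime, timezone

import requests

API = "https://api.github.com"
REPO = "example-org/example-repo"   # placeholder repository
MIN_ACCOUNT_AGE_DAYS = 30           # arbitrary threshold for illustration
LABEL = "needs-human-vetting"       # hypothetical label

session = requests.Session()
session.headers.update({
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
})


def account_age_days(login: str) -> int:
    """Days since the GitHub account was created."""
    user = session.get(f"{API}/users/{login}").json()
    created = datetime.fromisoformat(user["created_at"].replace("Z", "+00:00"))
    return (datetime.now(timezone.utc) - created).days


def prior_contributors() -> set[str]:
    """Logins that already have commits in the repository (first page only)."""
    resp = session.get(f"{API}/repos/{REPO}/contributors", params={"per_page": 100})
    return {c["login"] for c in resp.json()}


def flag_suspicious_issues() -> None:
    known = prior_contributors()
    # Note: the issues listing also includes open pull requests.
    issues = session.get(f"{API}/repos/{REPO}/issues", params={"state": "open"}).json()
    for issue in issues:
        author = issue["user"]["login"]
        if author in known:
            continue
        if account_age_days(author) < MIN_ACCOUNT_AGE_DAYS:
            # Label for manual review rather than auto-closing outright.
            session.post(
                f"{API}/repos/{REPO}/issues/{issue['number']}/labels",
                json={"labels": [LABEL]},
            )
            print(f"flagged #{issue['number']} opened by {author}")


if __name__ == "__main__":
    flag_suspicious_issues()
```

Flagging for human review rather than auto-closing keeps the failure mode survivable: legitimate first-time reporters get a slower path instead of a slammed door.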
Broader worries about AI-generated spam
- Many see this as the “new spam”: an asymmetry where low-cost AI output meets high-cost human review, across bug trackers, email, academia, and publishing.
- There is concern that:
- High-value projects will retreat from open contribution models.
- GitHub activity and even CVEs will become nearly worthless as hiring signals in a post-LLM world.
- Others argue that society eventually tamed email spam with tooling and norms, expect a similar trajectory here, and acknowledge we’re still in the painful early phase.