cURL removes bug bounties
Scale and Nature of the “AI Slop” Problem
- Many reports to cURL’s bounty were obviously LLM-generated: generic language, wrong project names, imaginary vulnerabilities, and copy‑pasted “chat” output.
- Reviewers found it exhausting: polite attempts to engage were met with incoherent replies, suggesting low English proficiency combined with overreliance on AI.
- Some commenters note that this started as early as 2023, before “AI slop” was widely recognized, which made it harder to spot in mixed-quality queues.
Entry Fees, Friction, and Game-Theoretic Fixes
- Several propose a refundable submission fee, returned for valid or good‑faith reports, to deter spam; commenters liken it to adding “trivial inconveniences” that dramatically reduce low‑effort behavior (a back-of-the-envelope sketch follows this list).
- Others warn this would:
- Deter serious but uncertain reporters and those with little money.
- Create admin, payment, and escrow complexity for maintainers.
- Incentivize companies to reject valid reports to keep the fee.
- Variants suggested: platform‑level fees (e.g., pay to join HackerOne, then rate‑limit bad actors), or tiered “double down” fees where a reporter pays again to escalate a report to more senior reviewers.
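
A back-of-the-envelope sketch of the refundable-fee argument, with purely illustrative numbers (none of these figures come from the discussion): mass submission stops being roughly free for slop senders, while good-faith reporters mostly get their money back.

```python
# Illustrative sketch of the refundable submission fee; all numbers are
# assumptions chosen for the example, not values proposed in the thread.
FEE = 20.00             # refundable fee charged per report, in dollars
P_REFUND_SLOP = 0.02    # chance a low-effort AI-generated report is judged valid
P_REFUND_HONEST = 0.95  # chance a good-faith report is judged valid or good-faith

def expected_cost(p_refund: float, fee: float = FEE) -> float:
    """Expected out-of-pocket cost per report when the fee is refunded with probability p_refund."""
    return (1 - p_refund) * fee

print(f"slop submitter:      ~${expected_cost(P_REFUND_SLOP):.2f} lost per report")
print(f"good-faith reporter: ~${expected_cost(P_REFUND_HONEST):.2f} lost per report")
```

The objections above are about the residual cost and friction this still imposes on honest reporters, and about who holds the escrowed money.
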
Structural Problems with Bug Bounties
- People on the receiving side describe huge volumes of low‑quality, copy‑pasted scanner output arriving long before AI entered the picture.
- From the submitter side, common complaints: unclear scope, misinformed triagers, “works as intended” rationalizations, severity downgrades, and outright nonpayment.
- There’s disagreement on whether bounties realistically deter selling exploits to offensive buyers; some see that as mostly a myth outside high‑end zero‑day markets.
Using AI to Fight AI
- One camp suggests LLMs could pre‑triage reports: given a “presume it’s wrong and explain why” prompt, models do surprisingly well at calling out slop on sample curl reports (a minimal sketch follows this list).
- Critics respond that:
- LLM judgments are non‑deterministic and easy to steer with leading prompts.
- False negatives/positives on security reports are unacceptable without human review.
- Overtrusting AI here repeats the original problem, just on the maintainer’s side.
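
A minimal sketch of that pre-triage idea, assuming the OpenAI Python SDK; the model name, prompt wording, and escalation rule are placeholders rather than anything the curl project actually runs, and the output is advisory only, which is the critics’ main condition.

```python
# Hedged sketch of an adversarial "presume it's wrong" pre-triage pass.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# model name and prompt wording are placeholders.
from openai import OpenAI

client = OpenAI()

TRIAGE_PROMPT = (
    "You are reviewing a security report submitted to an open source project. "
    "Presume the report is wrong. Check whether the referenced functions, files, "
    "and behaviors exist and whether the claimed flaw is coherent. Explain "
    "concretely why the report is invalid, or answer 'NEEDS HUMAN REVIEW' if "
    "you cannot rule it out."
)

def pre_triage(report_text: str) -> str:
    """Return the model's assessment; anything not clearly dismissed goes to a human."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model
        temperature=0,         # reduces, but does not eliminate, run-to-run variance
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content
```
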
Open Source, Incentives, and AI
- Many see open source as uniquely harmed: its code trains models, which then:
- Generate spam issues/PRs and bogus bug reports.
- Help competitors build proprietary services that undercut FOSS‑based business models.
- Others counter that:
- FOSS licenses explicitly permit learning from code; some argue training is “fair use.”
- LLMs can meaningfully assist real contributors when used as tools, not as generators of unchecked output.
- There’s concern that AI‑driven spam erodes maintainers’ will to accept outside contributions at all.
Alternatives and Reputation-Based Controls
- Suggestions include:
- Invite‑only or private bounty programs based on platform reputation.
- GitHub‑style “strike” or community tagging systems for repeat slop submitters.
- CTF‑style “flags” for some vulnerability classes to make validity unambiguous (a sketch follows at the end of this section).
- Critics note these can raise barriers for new researchers and don’t fully address non‑malicious but misguided AI‑assisted reports.
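
One way to read the “flags” idea, sketched below with assumed names and an assumed HMAC scheme (nothing here is an existing curl or HackerOne mechanism): the maintainer plants a secret-derived flag on a test deployment that can only be recovered by actually demonstrating the claimed impact, so first-pass triage reduces to checking a string.

```python
# Hypothetical flag scheme: plant a secret-derived flag on a test target;
# a report is only escalated if it contains the matching flag.
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # lives only on the test deployment

def issue_flag(target_id: str) -> str:
    """Flag planted on the target; derivable only with the server secret."""
    digest = hmac.new(SERVER_SECRET, target_id.encode(), hashlib.sha256).hexdigest()
    return f"FLAG{{{target_id}:{digest[:16]}}}"

def verify_flag(target_id: str, submitted: str) -> bool:
    """Triage check: constant-time comparison against the planted flag."""
    return hmac.compare_digest(issue_flag(target_id), submitted)

# A recovered flag verifies; a fabricated one does not.
planted = issue_flag("staging-42")
assert verify_flag("staging-42", planted)
assert not verify_flag("staging-42", "FLAG{staging-42:0000000000000000}")
```

This only helps for vulnerability classes where impact can be demonstrated against a live target, which is why the suggestion is limited to “some vulnerability classes.”
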