Death by a Thousand Slops

AI-generated “slop” in security reports

  • Many comments focus on AI-written vulnerability reports that look polished but are technically empty or fabricated.
  • Examples from curl’s HackerOne program show reports that:
    • Use generic textbook buffer overflow writeups with no real connection to curl’s code.
    • Mis-describe lines of code, hallucinate vulnerabilities, or even ship “PoC” code that doesn’t use the claimed function at all.
  • Some note that submitters often become aggressive when challenged, seemingly trying to intimidate maintainers into accepting bogus findings.

Human and organizational toll

  • Multiple people stress the mental load on maintainers: endless low-quality reports, gaslighting-style interactions, and time lost that can’t be recovered.
  • Commenters compare this to broader patterns: AI-generated nonsense turning up in code review, research peer review, and management decisions, all of it consuming human time to filter.
  • There’s appreciation for curl’s patient, good-faith handling of reports, but concern that this patience is being exploited.

AI vs human “careerist” slop

  • Commenters highlight that AI slop is only part of the problem; a larger share comes from humans:
    • Juniors chasing résumé bullet points or “open source contribution” checkboxes.
    • Security people incentivized to “find something” rather than to ensure findings are valid or help fix them.
  • Some see AI as an accelerant for an already bad bug bounty culture (“spray and pray” reports, tool-driven pentesting).

Proposed mitigations (and trade-offs)

  • Ideas raised:
    • Fees or refundable deposits for submissions; many see this as hostile to open source and logistically hard, but it could deter mass spam.
    • Reputation-based, invite-only, or private bounty programs; whitelist or vouching systems; a “minimum reputation to submit” threshold.
    • Requiring reproducible test cases or exploit code.
    • AI triage: use models to detect contradictions or hallucinations, or to attempt exploit generation, before a human looks (a minimal sanity-check sketch follows this list).
  • Skepticism remains: determined abusers can adapt, and raising barriers may exclude legitimate but new contributors.
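
One contradiction cited in the thread, “PoC” code that never calls the function it claims is vulnerable, is mechanically checkable before a model or a human ever reads the prose. The sketch below is a hypothetical pre-screen in Python, not anything curl or HackerOne actually runs; the report fields, function name, and queue labels are all assumptions for illustration.

    import re

    def poc_calls_claimed_function(claimed_function: str, poc_source: str) -> bool:
        # Require a word-boundary match followed by "(", so "send" does not
        # match "resend" and a bare mention in a comment does not count as a call.
        pattern = rf"\b{re.escape(claimed_function)}\s*\("
        return re.search(pattern, poc_source) is not None

    def triage(report: dict) -> str:
        """Route a submission before any human reads it (hypothetical fields)."""
        claimed = report["claimed_function"]  # e.g. "curl_easy_setopt"
        poc = report.get("poc_source", "")
        if not poc:
            return "low-priority queue: no reproducible test case attached"
        if not poc_calls_claimed_function(claimed, poc):
            return f"low-priority queue: PoC never calls {claimed}"
        return "human review"

    if __name__ == "__main__":
        bogus = {
            "claimed_function": "curl_easy_setopt",
            "poc_source": "import os\nos.system('echo overflow')",
        }
        print(triage(bogus))  # -> low-priority queue: PoC never calls curl_easy_setopt

A filter this crude only catches the clumsiest fabrications, which is consistent with the skepticism above: it buys triage time rather than solving the problem, and determined submitters will adapt.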

Broader “slopification” concerns

  • Parallels are drawn to email spam and SEO sludge: AI makes it cheaper to flood channels, degrading trust and discoverability.
  • A long subthread debates AI-generated art: some only value works with evident human effort and feel forced into over-filtering, even at the cost of missing genuine creators.
  • Several fear a general trend toward closed source, paywalls, and higher friction as a defensive response to pervasive slop.