You did this with an AI and you do not understand what you're doing here

AI-Generated Security Reports (“Slop”)

  • Many commenters see the HackerOne curl report as emblematic of a new wave of LLM-generated “security research”: long, confident, and technically empty reports.
  • People note telltale markers: over-politeness, flawless but generic prose, emojis, em‑dashes, verbose “enterprise-style” text, and bogus PoCs that don’t exercise the target code at all.
  • There’s concern that some submissions may be fully automated “agents” churning out reports for clout, CV lines, or bug bounties, with little or no human oversight.

Burden on Maintainers and OSS

  • Maintaining projects like curl is described as “absolutely exhausting” under a flood of AI slop, especially for security reports that must be taken seriously.
  • This is framed as a new kind of DoS: not against servers, but against human attention and goodwill; risks include burnout of volunteer maintainers and erosion of trust in bug reports.
  • Some argue the public “shaming” of such reports is a necessary deterrent and educational service.

LLMs and Real Security Work

  • Practitioners report that current models are not reliable 0‑day finders; fuzzers and traditional tools remain far more effective (a sketch of what that means follows this list).
  • AI can help with targeted tasks (e.g., OCR cleanup, summarizing, some code navigation), but claims of “0‑day oracles” are viewed as hype.
  • There is worry about attackers eventually using tuned models at scale, but others say we’re not there yet.
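To make the comparison concrete, here is a minimal sketch of the kind of coverage-guided fuzzing practitioners are pointing to, written in Python with Google's atheris harness. The toy parse_header target and its planted bug are invented purely for illustration; nothing below comes from curl or the HackerOne report.

```python
# Minimal coverage-guided fuzz harness using Google's atheris (pip install atheris).
# parse_header is a hypothetical toy target, not code from curl or the report.
import sys

import atheris


def parse_header(raw: bytes) -> tuple[str, str]:
    """Naive 'Name: value' header parser with a deliberate edge-case bug."""
    text = raw.decode("utf-8", errors="ignore")
    name, _, value = text.partition(":")
    # Planted bug the fuzzer can find: an empty header name indexes past the string.
    if name[0].isspace():          # IndexError when name == ""
        raise ValueError("leading whitespace in header name")
    return name.strip(), value.strip()


def test_one_input(data: bytes) -> None:
    # Expected, well-signalled failures are fine; only unexpected crashes count.
    try:
        parse_header(data)
    except ValueError:
        pass


atheris.instrument_all()               # collect coverage to guide mutation
atheris.Setup(sys.argv, test_one_input)
atheris.Fuzz()
```

Run with python3 fuzz_header.py after installing atheris; the thread's point is that this kind of mechanical, high-volume search still surfaces real crashes far more reliably than prompting a model to “find a 0‑day.”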

Responsibility and Human-in-the-Loop

  • Several commenters argue that if you submit AI-generated output (PRs, reports, essays), you own it and must verify it; forwarding raw LLM output is called lazy and unethical.
  • Others note humans are poor at acting only as “sanity-checkers” for automation under time pressure; responsibility tends to devolve onto the weakest link.

Mitigation Ideas

  • Suggestions include: charging a small fee or deposit per report (refunded for valid ones), rate-limiting early accounts, greylisting emoji-heavy or obviously AI-styled text, or banning unreviewed AI/agent contributions (a rough sketch of the greylisting idea follows this list).
  • Critics of fee-based schemes worry they would also deter good-faith, low-income, or casual researchers and reduce valuable findings.
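As a rough sketch of how the rate-limiting and greylisting suggestions could combine in a triage pass (every field name, threshold, and weight below is an assumption for illustration, not anything curl or HackerOne actually does):

```python
# Hypothetical pre-triage heuristics for incoming bug-bounty reports.
# All field names, thresholds, and weights are illustrative assumptions,
# not anything curl or HackerOne actually uses.
import re
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")


@dataclass
class Report:
    body: str
    account_created: datetime   # UTC-aware creation time of the reporter account
    prior_valid_reports: int


def greylist_score(report: Report) -> int:
    """Return a rough 'look at this more sceptically' score; higher is worse."""
    score = 0
    # Rate-limit / add scrutiny for brand-new accounts with no track record.
    if datetime.now(timezone.utc) - report.account_created < timedelta(days=30):
        score += 2
    if report.prior_valid_reports == 0:
        score += 1
    # Emoji-heavy, very verbose 'enterprise-style' text was cited as a telltale marker.
    if len(EMOJI_RE.findall(report.body)) >= 3:
        score += 2
    if len(report.body) > 8000:
        score += 1
    return score


def needs_human_greylist(report: Report, threshold: int = 3) -> bool:
    """Greylist = slower queue plus a request for a working PoC, never auto-reject."""
    return greylist_score(report) >= threshold
```

Any such heuristic runs into the critics' objection above: it also penalises nervous first-time reporters, which is why this sketch only adds friction (a slower queue, a request for a working PoC) and never rejects a report automatically.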

Broader AI Overuse and Social Effects

  • Similar AI slop is reported in GitHub PRs, customer support tickets, resumes, classroom assignments, online courses, and forums.
  • This leads to longer, lower-signal communication (messages expanded by AI, then summarized again by AI), described as the “opposite of data compression.”
  • Educators and employers see students and juniors outsourcing thinking to AI, with concerns about skill atrophy, misaligned incentives, and a cultural shift toward superficial “output” over understanding.

Trust, Detection, and Future Risks

  • People worry that as AI-generated text becomes subtler, distinguishing human-written from AI-generated content will get harder, encouraging paranoia and more closed or identity-verified communities.
  • There’s speculation that spammy reports might also be probing processes: mapping response times and review rigor as a prelude to more serious attacks.