Looks like it is happening
AI-written papers and arXiv submission spike
- The blog post observes that high-energy theory (hep-th) submissions to arXiv have roughly doubled and speculates this is due to LLM-written papers matching the (already low) bar of typical work in that subfield.
- Some commenters accept this as evidence that “it is happening” (AI slop flooding theory). Others point out a methodological flaw: counting papers by last-modified date biases toward recent months, since any revision moves an old paper forward in time; when papers are counted by original submission date instead, the spike largely disappears.
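The date-counting flaw can be illustrated with a minimal sketch. The data below is invented for illustration (not real arXiv metadata): each record pairs a paper's original submission month with its last-modified month, showing how revisions of old papers inflate the most recent month when counts are keyed on modification date.

```python
from collections import Counter

# Hypothetical records: (original submission month, last-modified month).
# Revised papers "move" into the latest month under modification-date counting.
papers = [
    ("2023-01", "2023-01"),  # untouched old paper
    ("2023-02", "2025-06"),  # old paper, recently revised
    ("2023-03", "2025-06"),  # old paper, recently revised
    ("2025-06", "2025-06"),  # genuinely new paper
]

by_submission = Counter(sub for sub, _ in papers)
by_modification = Counter(mod for _, mod in papers)

# The apparent "spike" in 2025-06 exists only under modification-date counting.
print(by_submission["2025-06"])    # 1 genuinely new paper
print(by_modification["2025-06"])  # 3 papers appear "new"
```

The same bias applies to any time series built from a mutable timestamp; the commenters' point is that the blog post's doubling may be an artifact of exactly this.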
Peer review, arXiv, and gatekeeping
- Several clarifications: arXiv is a preprint server, not a peer-reviewed journal, but it does have minimal screening and requires either a recognized institutional affiliation or an endorsement from an established author.
- Consensus: arXiv checks format and basic sanity, not correctness; once you’re “in,” there’s no real peer review.
- Some argue that peer review itself was already failing under “publish or perish,” and AI just amplifies the existing firehose problem.
Mediocrity, incentives, and the end of signal
- Many agree that a large fraction of papers were already low-value products of career incentives: quantity, citations, grants. AI just lowers the cost of producing that slop.
- Optimistic view: this may force a crisis that breaks an unsustainable system and pushes toward new evaluation metrics (beyond paper counts).
- Pessimistic view: systems can persist in “utterly broken” states; more noise may simply bury the few real contributions.
Impact on careers, outsiders, and social filtering
- One fear: steady output of even mediocre papers currently serves as a career signal for early-career researchers; if AI makes that output nearly free, the signal collapses.
- Another: with AI slop flooding preprints, readers will rely even more on social filters—big names, elite institutions, and existing networks—making it harder for outsiders to be noticed even if their work is good.
- Some propose stronger rules (e.g., attestations that humans wrote the paper, bans for violations) or entirely new systems where author identity is hidden and publications are decoupled from career advancement.
AI slop beyond academia (HN, YouTube, software)
- Commenters see parallel trends on HN: rising submission and comment volume, suspected bots, and LLM-written low-effort posts.
- Similar complaints target YouTube: AI-voiced, auto-scripted videos that mimic high-effort content but are shallow and repetitive.
- Broader concern: society is heading into a “noise crisis” across domains; future value will depend on new tools for filtering, ranking, and verification.