Please Commit More Blatant Academic Fraud (2021)
Value of “wrong” or marginal papers
- Some argue imperfect work can still be useful: it clarifies edge cases, motivates others, or serves as a discrete “unit” of knowledge even if never directly extended.
- Others counter that knowingly publishing incorrect or insubstantial ideas pollutes the literature and wastes others’ time, especially when framed as “promising first steps.”
Perverse incentives & publish‑or‑perish
- Many describe strong pressure to publish, hit quotas, or secure funding, leading to overselling, salami-slicing results into minimal publishable units, and pushing out papers they know are weak.
- Co-authorship on low-value or even pseudoscientific work is reported as common, often driven by supervisors or institutional metrics rather than genuine contribution.
- Several people say refusing to play these games hurt their publication records and careers.
Anecdotes of misconduct and low standards
- Stories include: blatant implementation bugs that made it into papers, plagiarized work that still nearly passed review, and “novel” components that add no value but yield a publication on the strength of the authors’ reputation.
- Some describe departments where the implicit game is to push barely-sound or unsound work until tenure, wrapped in plausible deniability.
Field-specific concerns
- Social sciences and certain subfields (e.g., parts of psychology, behavioral economics, evolutionary psychology) are repeatedly accused of weak methods, p-hacking, biased experiments, and narrative-driven “conclusions.”
- Others push back, noting huge, verifiable datasets in social sciences and arguing that poor statistics and incentives, not the disciplines as a whole, are the main problem.
- Physics and engineering are seen as somewhat more self-correcting when results must work in real-world products, though theory-only subfields are flagged as also vulnerable.
Peer review, conferences, and benchmarking
- Double-blind review is described as leaky in practice; conflicts of interest, reviewer–author overlap, and even collusion rings are said to be common in large CS conferences.
- Benchmark “crimes” and superficial statistics (single-run benchmarks, no variance reporting, cherry-picked baselines) are highlighted as problems in both academia and industry; see the sketch after this list.
- Some defend conferences as venues for discussion of imperfect work; others insist archival publications should represent completed, carefully vetted results.
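The single-run benchmark complaint is easy to make concrete. A minimal sketch, assuming hypothetical `baseline` and `candidate` workloads standing in for a published method and its comparison point: time each over many runs and report mean and standard deviation, so a reader can judge whether an apparent speedup exceeds run-to-run noise.

```python
import statistics
import time

def bench(fn, runs=30, warmup=3):
    """Time fn over several runs; return (mean, stdev) in seconds.

    A single-run benchmark hides run-to-run noise (caches, scheduler,
    thermal state); reporting variance lets readers judge whether a
    measured difference is real.
    """
    for _ in range(warmup):  # discard warm-up runs before measuring
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples), statistics.stdev(samples)

# Hypothetical workloads standing in for a baseline and a "novel" method.
baseline = lambda: sum(i * i for i in range(100_000))
candidate = lambda: sum(map(lambda i: i * i, range(100_000)))

for name, fn in [("baseline", baseline), ("candidate", candidate)]:
    mean, stdev = bench(fn)
    print(f"{name}: {mean * 1e3:.2f} ms ± {stdev * 1e3:.2f} ms (n=30)")
```

Even this is a floor, not a ceiling: the thread's complaint is that many published comparisons skip variance, warm-up, and fair baselines entirely.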
Trust, policy, and reform ideas
- Several commenters now treat most papers as “guilty until proven innocent,” especially after failed replications.
- There is concern that low-quality or fraudulent work informs public policy.
- Proposed fixes include: funding and prestige for replication, harsher consequences for fraud, better governance of review, digital signatures for accountability, and shifting incentives away from sheer publication counts.
- Others caution against overreaction and argue that, despite flaws, “heads of steam” generally build around real, replicable advances.