A flawed paper in management science has been cited more than 6k times

Replication Failures, Misconduct, and Career Risk

  • Multiple commenters describe failed attempts to reproduce highly cited work (biotech, sensors, management, CS), sometimes concluding the original data were faked or heavily massaged.
  • Junior researchers who uncover problems often face stonewalling by authors, non-response from journals, and silence from institutions; trying to expose misconduct is seen as career suicide.
  • Common coping strategies: abandoning the topic, switching labs, or leaving academia for industry. Replication is treated as low-status, unrewarded work.

Citations, Metrics, and Gaming the System

  • High citation counts are widely seen as decoupled from quality: people copy references without reading them, bad or refuted work keeps getting cited, and citation rings and inflated author lists are reported.
  • Proposed fixes include:
    • Overlay “trust” or “taint” labels on the citation graph based on known problems and how papers cite flawed work.
    • Redefine the h-index to require replications, or add tiers (data disclosed, replicated, etc.).
  • Others argue any such metric will itself be gamed and further entrench conservatism and reputation-protection.
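The two metric proposals above can be made concrete. The following is a minimal sketch, not an implementation anyone in the thread actually built: `taint_scores` propagates a damped "taint" from papers flagged as flawed to the papers that cite them, and `replication_h_index` computes an h-index that counts only replicated papers. The function names, the damping factor, and the max-based inheritance rule are all invented for illustration.

```python
# Hypothetical sketch of the proposed citation-graph metrics.
# All names and weights are illustrative assumptions, not a real system.

def taint_scores(citations, flagged, damping=0.5, iterations=20):
    """Propagate taint through a citation graph.

    citations: dict mapping each paper to the list of papers it cites.
    flagged:   set of papers known to be flawed (taint fixed at 1.0).
    A paper inherits a damped fraction of the worst taint among its
    references; iterating to a fixed point spreads taint transitively.
    """
    scores = {p: 1.0 if p in flagged else 0.0 for p in citations}
    for _ in range(iterations):
        new = {}
        for paper, refs in citations.items():
            if paper in flagged:
                new[paper] = 1.0
                continue
            worst_ref = max((scores.get(r, 0.0) for r in refs), default=0.0)
            new[paper] = damping * worst_ref
        scores = new
    return scores


def replication_h_index(citation_counts, replicated):
    """h-index counting only papers with at least one successful replication.

    citation_counts: dict paper -> citation count.
    replicated:      set of papers that have been independently replicated.
    """
    counts = sorted(
        (c for p, c in citation_counts.items() if p in replicated),
        reverse=True,
    )
    return sum(1 for i, c in enumerate(counts) if c >= i + 1)
```

For example, if flawed paper A is cited by B, which is cited by C, the taint decays along the chain (A = 1.0, B = 0.5, C = 0.25); a journal or overlay service could surface these scores next to citation counts. The gaming objection raised above applies directly: both functions depend on who controls the `flagged` and `replicated` sets.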

Journals, Retractions, and Institutional Incentives

  • Retractions are debated: some say they should be reserved for clear misconduct; others argue failing to correct known, influential errors is itself harmful.
  • Editors and universities are portrayed as highly reluctant to retract or even publish critical comments, especially when reputations, elite institutions, or hot policy topics (e.g., sustainability/ESG) are involved.
  • Publish-or-perish, prestige journals, and grant incentives are repeatedly cited as root causes.

Management/Social Science and “Scientism”

  • Many express deep skepticism toward “management science” and parts of psychology, business, nutrition, and medicine, seeing them as especially prone to non-replicable or over-optimistic claims.
  • Some argue that much of contemporary “science” functions more like legitimizing rhetoric for elites (“The Science says…”) than like a robust error-correcting system.

Ethics: Bad People vs Bad Systems

  • A large subthread debates whether authors of flawed or fraudulent work are “bad people” or normal people responding to perverse incentives.
  • One side stresses systemic fixes, blameless postmortems, and avoiding villain-labeling; the other insists that the absence of real personal consequences enables ongoing fraud and erodes public trust.

Proposed Reforms

  • Preregistration; mandatory sharing of data/code; explicit publication of replication attempts; visible links from original papers to critiques; rewarding debunking; and more openness (e.g., PubPeer-style commentary) are all suggested.
  • Some pessimists argue that if a field is mostly bogus, the only rational move is to disengage rather than search for rare “diamonds in the rough.”