Ortega hypothesis

Citation Dynamics and Status Effects

  • Multiple comments argue that famous scientists and institutions receive disproportionate citations due to status and “rich-get-richer” dynamics, and because big names serve as convenient quality signals.
  • Availability matters: well-known researchers give more talks, trigger citation alerts, and thus are top-of-mind when people write.
  • Under deadlines, authors often reuse whatever is already in their BibTeX and default to landmark or highly cited papers, even when more relevant work exists.
  • This feeds back into peer review, where recognizable names are unconsciously treated as more credible.
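The “rich-get-richer” dynamic described above is commonly modeled as preferential attachment. A minimal sketch (an illustration, not a model proposed in the thread): each new paper cites one earlier paper with probability proportional to that paper’s citations plus one, implemented with the standard urn trick.

```python
import random

def simulate_citations(n_papers=20_000, seed=0):
    """Preferential-attachment sketch: each new paper cites one earlier
    paper with probability proportional to (citations + 1). The +1 gives
    uncited papers a nonzero chance of being picked up."""
    rng = random.Random(seed)
    citations = [0] * n_papers
    # Urn trick: each paper appears in `urn` (citations + 1) times, so a
    # uniform draw from `urn` is a draw proportional to citations + 1.
    urn = [0]
    for new in range(1, n_papers):
        target = rng.choice(urn)
        citations[target] += 1
        urn.append(target)  # target's weight grows by 1
        urn.append(new)     # new paper enters with weight 1
    return citations

counts = sorted(simulate_citations(), reverse=True)
share = sum(counts[: len(counts) // 100]) / sum(counts)
print(f"top 1% of papers hold {share:.0%} of all citations")
```

Even with every paper starting from identical odds, the citation distribution this produces is heavily skewed: a small top slice accumulates a large share of citations while most papers end up with zero or one, purely from the feedback loop, with no quality differences at all.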

Limits of Citations as a Proxy for Contribution

  • Several participants say citation counts don’t reflect who actually generated ideas or influenced thinking.
  • People often remember a person or a talk, then find “some” paper by them to cite.
  • Prior or parallel work can be ignored once a “popular” paper crystallizes an idea. Retracted or derivative work can keep getting cited.
  • There are examples of techniques or measures widely misattributed because one paper became the canonical citation.

Ortega vs Newton: Complementary or Competing?

  • Many see both hypotheses as partially true: a few major breakthroughs shape fields, but they depend on extensive incremental work by many others.
  • “Dots and connectors” framing: myriad small results create the dots; a few people connect them. Sometimes multiple “giants” independently do so once the dots exist.
  • Others argue that giants may also be wrong and can hold fields back until paradigms shift (invoking Kuhn).

Role of “Mediocre” Scientists

  • Defenses of the Ortega view emphasize data collection, routine lab work, and refinements that almost anyone competent could do but are essential for validation and replication.
  • Teaching and maintaining a living knowledge base are highlighted as crucial: without many practitioners, entire subfields or tacit know‑how can be lost.
  • Analogies include ordinary soldiers vs special forces, or large software teams where a few design core architectures but many implement and maintain.

80/20, Waste, and Risk of Bad Science

  • Some claim nature is “80/20” and that most researchers “might as well not exist.”
  • Pushback stresses: you cannot know ex ante which 20% will matter; like venture capital, many failed attempts are the cost of the few big wins.
  • Others note a downside: scaling up the number of mediocre researchers also scales fraudulent or low‑quality work, which can mislead good scientists and waste years.
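The venture-capital analogy above can be made concrete with a heavy-tailed payoff sketch (illustrative assumptions, not a calculation from the thread): if project payoffs follow a Pareto distribution with a heavy tail, the top few percent of projects dominate the total return, yet every draw is statistically identical beforehand, so the eventual winners cannot be identified ex ante.

```python
import random

def pareto_payoffs(n=100_000, alpha=1.5, seed=1):
    """Draw n project payoffs from a Pareto(alpha) distribution.
    alpha < 2 gives a heavy tail in which a handful of big wins
    dominate the sum; the draws are i.i.d., so nothing distinguishes
    the eventual winners before the fact."""
    rng = random.Random(seed)
    return sorted((rng.paretovariate(alpha) for _ in range(n)), reverse=True)

payoffs = pareto_payoffs()
share = sum(payoffs[: len(payoffs) // 100]) / sum(payoffs)
print(f"top 1% of projects produce {share:.0%} of total payoff")
```

Under these assumed parameters the top 1% of projects account for a large share of the total payoff, which is the pushback’s point: funding the whole portfolio, including the many “mediocre” bets, is the price of capturing the few outliers.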

Paradigms, Groupthink, and “One Funeral at a Time”

  • One line of criticism: large cliques chase fashionable hypotheses long past their usefulness, crowding out alternative ideas.
  • Examples raised include blue LEDs (large groups pursuing one material system, while the key breakthrough came from going against consensus) and Alzheimer’s amyloid‑beta research allegedly consuming vast resources with little payoff.
  • A cited study on “Planck’s principle” is used to argue that dominant figures can slow progress until they leave the field.
  • Others counter that incremental “normal science” and many small advances (e.g., materials optimization, measurement campaigns) are exactly how much real progress is made.

Testability and Metrics for the Hypothesis

  • Several commenters say the Ortega vs Newton debate is hard to make empirically sharp; current work relies too heavily on citation networks.
  • Suggestions include decomposing “scientific progress” into components (data gathering, hypothesis generation/testing, teaching, community building, fundraising, etc.) and trying to quantify contributions along these axes.
  • There is skepticism that any clean, decisive test is possible; some see the whole issue as more philosophical than scientific.

Modern Science as Team Effort

  • Multiple analogies to engineering and software: earlier eras allowed lone geniuses; modern problems require large teams, yet still hinge on a few key conceptual or architectural insights.
  • The prevailing view in the thread leans toward a layered model: landmark ideas, masses of “lunchpail” work extending and validating them, and then new landmarks built on that enlarged base.