Guess I'm a rationalist now

What “rationalist” means in this thread

  • Distinct from historical philosophical rationalism: here it means the LessWrong / Yudkowsky project of “rationality” – teaching people to reason better, more empirically and probabilistically.
  • Emphasis on Bayesian reasoning, calibration, explicit probabilities, “epistemic status” labels, and trying to be “less wrong” rather than certainly right (a minimal worked example of such an update follows this list).
  • Critics say Bayesian talk (“priors”, “updating”) often becomes mystifying jargon or a veneer over ordinary guesswork, and that many adherents don’t grasp statistics as well as they think.
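
For readers outside the jargon, the “updating” in question is ordinary Bayes’-rule arithmetic. A minimal sketch follows; the numbers are made up purely for illustration.

```python
# Minimal illustration of the Bayes-rule "updating" the jargon refers to.
# All numbers here are made up for illustration.

prior = 0.10                 # P(H): initial credence in some hypothesis H
p_evidence_if_true = 0.80    # P(E | H): probability of the evidence if H holds
p_evidence_if_false = 0.20   # P(E | not H)

# P(E) via the law of total probability
p_evidence = prior * p_evidence_if_true + (1 - prior) * p_evidence_if_false

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
posterior = p_evidence_if_true * prior / p_evidence

print(f"prior {prior:.2f} -> posterior {posterior:.2f}")  # prior 0.10 -> posterior 0.31
```

The formula only reorganizes its inputs; whether the prior and likelihoods are anything more than ordinary guesses is exactly what the critics above dispute.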

Elitism, labels, and cult/religion comparisons

  • Many comments see strong Randian / objectivist vibes: belief in “the right minds” solving everything, hero worship, groupthink, self-congratulation about being unusually correct.
  • The label “rationalist” is attacked as implying others are irrational; some argue even “rationality” as a movement name overclaims.
  • Multiple posters describe the scene as proto‑religious or outright cult‑like: charismatic leaders, apocalyptic AI focus, insider jargon, communal houses, “we are the chosen who see clearly” dynamics, sexual misconduct allegations, and at least one genuine spin‑off cult (the Zizians).
  • Defenders say the community is unusually explicit about uncertainty and keeps “things I was wrong about” lists, and that critics either ignore this or dismiss it as mere pose.

IQ, race, and scientific standards

  • A long subthread argues that many rationalists and adjacent blogs (e.g. ACX) are too friendly to “human biodiversity” / race‑IQ claims and flawed work like Richard Lynn’s global IQ data.
  • Critics say this reveals motivated reasoning, poor statistical literacy, and willingness to dignify racist pseudoscience as “still debated”.
  • Others counter that genetic group differences are real in many traits, that it’s dogmatic to rule out any group IQ differences a priori, and that being disturbed by an idea isn’t a refutation.
  • There is a meta‑critique that rationalists often cherry‑pick papers and can be impressed by anything with numbers, even when whole fields (psychometrics, some social science) are methodologically shaky.

AI risk, doomerism, and priorities

  • One major axis: are rationalists right to prioritize existential AI risk?
    • Skeptics: focus on superintelligent doom is overconfident, distracts from mundane but real harms (bias, surveillance, wealth concentration), and dovetails with corporate marketing and power‑grab narratives.
    • Supporters: if there is even a single‑digit probability of extinction‑level AI failure, precautionary principles and expected‑value arguments justify extreme concern; they liken this to nuclear or climate risk (see the arithmetic sketch after this list).
  • Some accuse rationalists/EA of “longtermism” that morally privileges hypothetical vast future populations over present suffering, enabling ends‑justify‑means thinking (e.g. SBF narratives, “win now so future trillions are saved”).
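
The expected‑value argument above is plain arithmetic; the dispute is over the inputs. A sketch under illustrative assumptions (the probability, population figure, and variable names are made up for this example, not taken from the thread):

```python
# Sketch of the expected-value argument for prioritizing existential AI risk.
# Both inputs are illustrative assumptions, not figures from the thread.

p_doom = 0.05          # a "single-digit" chance of extinction-level failure
lives_at_stake = 8e9   # present world population; longtermist versions count far more

expected_loss = p_doom * lives_at_stake
print(f"expected lives lost: {expected_loss:,.0f}")  # expected lives lost: 400,000,000
```

Skeptics in the thread dispute not the multiplication but whether p_doom is meaningfully estimable at all, and note that longtermist versions inflate lives_at_stake to hypothetical trillions, which is where the ends‑justify‑means worry enters.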

First principles, reductionism, and wisdom

  • Many commenters say the movement is too in love with reasoning from first principles and underestimates the evolved complexity of biology, society, and culture.
  • Reductionism is defended as practically fruitful in many sciences, but critics stress emergent phenomena, irreducible complexity, and the danger of ignoring history and on‑the‑ground knowledge (“touch grass”).
  • Several contrast “rationality” with older notions of “wisdom”, arguing that clever argument chains can justify antisocial or inhuman conclusions if not tempered by context and moral intuition.

EA, politics, and real-world impact

  • Effective Altruism, tightly intertwined with rationalism, is heavily debated.
    • Critics: EA and rationalism channel elite energy into technocratic, depoliticized fixes (malaria nets, shrimp welfare, AI safety) while ignoring structural issues, labor, and capitalism; “earning to give” rationalizes working in harmful industries.
    • Defenders: EA has directed large sums to global health (malaria, vitamin A, vaccines), is not monolithic, and its emphasis on impact assessment is a real upgrade over feel‑good charity.
  • Some note that rationalists present themselves as above politics yet often converge on center‑liberal or techno‑libertarian views, with worrying overlaps with neoreaction and billionaire agendas in some cases.

Community dynamics and reception

  • Several ex‑insiders describe early attraction (blog quality, intellectual excitement) followed by disillusionment with groupthink, contrarianism for its own sake, and harsh treatment of external criticism.
  • Others report positive experiences at events (Lightcone/Lighthaven, Manifest) but also moments of off‑putting arrogance (“we’re more right than these other attendees”).
  • There is meta‑reflection that HN’s hostility to rationalists follows familiar near‑outgroup dynamics: rationalists are close enough to the HN demographic that their overconfidence and self‑branding trigger especially strong annoyance.