Does X cause Y? An in-depth evidence review (2021)
Limits and Fragility of Causal Claims
- Many comments stress how hard real-world causality is: unmeasured confounders, and inadvertent conditioning on colliders, can make an observed X→Y relationship illusory.
- Even seemingly strong criteria (perfect correlation, temporal precedence, no obvious Z) are seen as practically unattainable because “ruling out all Z” is almost impossible.
- Complex, interacting systems (rocks and fluids, software, macroeconomies) make simple “increase X → increase Y” stories unreliable.
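The confounding worry above can be made concrete with a minimal simulation. This is a hypothetical toy model (not from the article or comments): an unmeasured variable Z drives both X and Y, while X has no causal effect on Y at all, yet the two end up strongly correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical toy model: unmeasured Z drives both X and Y;
# X has NO causal effect on Y.
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = z + rng.normal(size=n)

# X and Y are nonetheless strongly correlated
# (theoretical r = 0.5 for this setup).
r_marginal = np.corrcoef(x, y)[0, 1]
print(round(r_marginal, 2))
```

Without measuring Z, no amount of X–Y data distinguishes this world from one where X really does cause Y, which is the "ruling out all Z" problem in miniature.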
DAGs, Confounders, and Colliders
- Directed Acyclic Graphs (DAGs) are repeatedly cited as central tools: they clarify what must be measured, what must not be conditioned on, and where collider bias can arise.
- Clear definitions of confounders vs. colliders are provided, with emphasis that confounders should be controlled for, whereas conditioning on colliders introduces spurious associations.
- Some note the challenge that even DAGs assume a clean division of the world into variables, which may be philosophically or practically questionable.
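Collider bias is easy to demonstrate directly. A minimal sketch, assuming the DAG X → C ← Y with X and Y generated independently: conditioning on the collider C (here, by selecting on it) manufactures an association out of nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# X and Y are independent; C is a collider, a common effect of both,
# as in the hypothetical DAG X -> C <- Y.
x = rng.normal(size=n)
y = rng.normal(size=n)
c = x + y + rng.normal(size=n)

r_all = np.corrcoef(x, y)[0, 1]            # ~0: no association overall
sel = c > 1.0                              # conditioning on the collider
r_sel = np.corrcoef(x[sel], y[sel])[0, 1]  # clearly negative: spurious
print(round(r_all, 2), round(r_sel, 2))
```

The selection step is the key: within the selected group, knowing X is large makes a large Y less necessary to explain a large C, inducing a negative correlation between two genuinely independent variables.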
Observational vs Experimental Evidence
- Several argue the article is too dismissive of observational studies, especially in nutrition and epidemiology, where RCTs are often impossible or unethical.
- Examples: smoking and lung cancer, nutrition cohorts, and Bradford Hill criteria as useful for public-health-level causal judgments.
- Others emphasize the perils: p-hacking, non-replication, cargo-cult “X linked to Y” psychology papers, and headlines built on weak or mis-specified studies.
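The "X linked to Y" failure mode above has a simple mechanical core: test enough noise variables and some will clear p < 0.05 by chance. A hedged sketch, assuming 1,000 pure-noise "exposures" tested against one pure-noise outcome:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_exposures = 500, 1000

# 1,000 pure-noise "exposures" tested against one pure-noise outcome:
# every true effect is exactly zero.
outcome = rng.normal(size=n_subjects)
exposures = rng.normal(size=(n_exposures, n_subjects))

r = np.array([np.corrcoef(e, outcome)[0, 1] for e in exposures])
r_crit = 1.96 / np.sqrt(n_subjects)   # approximate two-sided p < 0.05 cutoff
hits = int((np.abs(r) > r_crit).sum())
print(hits)   # expect roughly 5% (~50) false "links" by chance alone
```

Report only the hits and the result looks like dozens of publishable findings; this is why uncorrected multiple comparisons and selective reporting can carry weak observational work into headlines.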
Methods, Math, and Causal Inference Advances
- Debate over regression, “controlling for Z,” and advanced methods (GMM, IP weighting, Mendelian randomization, modern causal inference with graphs and ML).
- Some criticize the article’s skepticism as Dunning–Kruger-ish and out of touch with recent causal-inference advances; others defend lay skepticism when math is opaque and assumptions unclear.
- The frequentist-vs-Bayesian debate is seen as mostly orthogonal to causality: a wrong DAG or model stays wrong under either paradigm.
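What "controlling for Z" buys, and what it assumes, can be shown in a few lines. A minimal sketch under a hypothetical DAG Z → X and Z → Y with zero true effect of X on Y: the naive regression is badly biased, while including the confounder as a regressor recovers the truth. The catch flagged in the debate above is that this only works because Z is measured and the DAG is right.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical DAG: Z -> X and Z -> Y, with ZERO true effect of X on Y.
z = rng.normal(size=n)
x = z + rng.normal(size=n)
y = 2.0 * z + rng.normal(size=n)

# Naive regression of Y on X alone: biased slope (~1.0 in this setup).
b_naive = np.polyfit(x, y, 1)[0]

# "Controlling for Z": include Z as a regressor; X's coefficient
# returns to ~0, the true causal effect.
X_mat = np.column_stack([x, z, np.ones(n)])
b_adj = np.linalg.lstsq(X_mat, y, rcond=None)[0][0]
print(round(b_naive, 2), round(b_adj, 2))
```

If Z were unmeasured, or if Z were actually a collider rather than a confounder, the same adjustment step would create bias instead of removing it, which is exactly why the DAG matters more than the estimator.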
Philosophy, Incentives, and Communication
- A minority adopt near-nihilist stances (“no causality at all”), countered by intervention-based views where causality requires a notion of “outside” intervention.
- Commenters highlight misaligned incentives, media spin, industry or political motives, and the public’s overreliance on headlines.
- Anecdotes (car wires, placebos, fighting games) illustrate how intuitive causal stories can be wrong without deeper models and systematic experiments.