Individual response

My journey to understand this thing called “evidence-based decision making” started with a vague intuition that the norms taught to me and my colleagues were logically flawed. Since then I’ve been motivated to borrow tools from mathematical and philosophical logic to formalize Paul Rosenbaum’s beautiful narrative description of principled and honest scientific discourse in his book Observational Studies (Section 1.3). Here is an excerpt, but the entire short section is worth thinking about.

I’ve learned much from Sander’s (and Frank’s) writings and posts. I credit his presentations on rebuilding statistics on information-theoretic grounds as crucial to my scientific understanding.

I find it surprising that so many scholars think practicing good science is distinct from (Bayesian) decision-theoretic considerations. As a first-order approximation of a rational scientific actor, an agent who attempts to maximize the information gained from “experiments” (defined to include observational studies) seems like a good starting point.
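To make “maximize the information from experiments” concrete, here is a minimal sketch (the function names, hypotheses, and numbers are my own illustration, not from the post): with two rival hypotheses and a binary observation, the expected information gain of an experiment is the expected reduction in entropy of the posterior, and a discriminating experiment scores strictly higher than an uninformative one.

```python
import math

def entropy(ps):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

# Uniform prior over two rival hypotheses.
prior = [0.5, 0.5]

def expected_info_gain(lik_success):
    """Expected entropy reduction (mutual information, in bits) from one
    binary observation, where lik_success[i] = P(success | hypothesis i)."""
    gain = 0.0
    for obs_lik in (lik_success, [1.0 - l for l in lik_success]):
        p_obs = sum(pr * l for pr, l in zip(prior, obs_lik))
        if p_obs == 0:
            continue
        post = [pr * l / p_obs for pr, l in zip(prior, obs_lik)]
        gain += p_obs * (entropy(prior) - entropy(post))
    return gain

# An experiment whose outcome discriminates between the hypotheses
# is worth more than one whose outcome is equally likely under both.
assert expected_info_gain([0.9, 0.1]) > expected_info_gain([0.5, 0.5])
```

An information-maximizing agent would choose the first experiment; the second teaches it nothing regardless of the outcome.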

I acknowledge this is a minority position, but after much study, I have to disagree with the causal inference scholars who claim probability is not a strong enough language with which to express causal concepts. There were some interesting Twitter threads (now deleted, sadly) where Harry Crane and Nassim Taleb challenged Pearl on his position that causation lies outside standard statistical inference.

Causal inference is closely related to exchangeability, and disagreements about study design are better discussed in terms of which factors render the groups being compared non-exchangeable.
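Exchangeability has a crisp probabilistic meaning that is easy to demonstrate. As a sketch (the model and function name are my own illustration, not from the post): under a Beta(1,1)-Bernoulli model, the joint probability of a 0/1 sequence, computed one conditional prediction at a time, comes out the same for every reordering of the data.

```python
import math
from itertools import permutations

def seq_prob(xs):
    """Joint probability of a 0/1 sequence under Laplace's rule of
    succession (a Beta(1,1)-Bernoulli model), built up from the
    sequential predictions P(next = 1 | k ones in n draws) = (k+1)/(n+2)."""
    prob, k, n = 1.0, 0, 0
    for x in xs:
        p1 = (k + 1) / (n + 2)
        prob *= p1 if x == 1 else 1.0 - p1
        k, n = k + x, n + 1
    return prob

# The computation runs left to right, yet the answer depends only on
# the counts, not the order: the sequence is exchangeable.
xs = (1, 1, 0, 1, 0)
baseline = seq_prob(xs)
assert all(math.isclose(seq_prob(p), baseline) for p in permutations(xs))
```

A design disagreement is then about whether treated and control outcomes can plausibly be modeled this way, i.e. whether some unaccounted-for factor makes the joint model sensitive to which group a unit came from.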

Causal inference is just inference.[1] A community of scholars can be modeled as a group of bettors: those with the best models of future observations are the ones whose forecasts enable them to win more than they lose on a consistent basis. Converging to the best causal model ends the process of betting on outcomes, unless someone finds an anomaly worth betting on, of course.

Possession of good causal models enables one to be like the gambler in J. L. Kelly’s paper “A New Interpretation of Information Rate.”
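Kelly’s result is what ties good probabilistic models to long-run wealth. A minimal sketch (the function names and numbers are illustrative, not from Kelly’s paper): for a binary bet, the log-optimal stake follows directly from the bettor’s probability estimate, and a bettor whose estimate is closer to the truth achieves a higher expected log-growth under the true odds.

```python
import math

def kelly_fraction(p, b=1.0):
    """Log-optimal fraction of bankroll to stake on a binary bet won
    with probability p at net odds b (gain b per unit staked)."""
    return max(0.0, p - (1.0 - p) / b)

def growth_rate(f, p, b=1.0):
    """Expected log-growth of bankroll per bet when staking fraction f,
    evaluated under the true win probability p."""
    return p * math.log(1.0 + f * b) + (1.0 - p) * math.log(1.0 - f)

# Even-money bet whose true win probability is 0.6.
p_true = 0.6
f_good = kelly_fraction(0.60)  # bettor whose model matches reality
f_bad = kelly_fraction(0.55)   # bettor with a miscalibrated model

# Both face the same true odds; the better model compounds faster.
assert growth_rate(f_good, p_true) > growth_rate(f_bad, p_true)
```

This is the sense in which the scholar with the better causal model wins on a consistent basis: not on any single bet, but in compounded log-wealth over repeated play.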

  1. David Rohde, “Causal Inference, is just Inference: A beautifully simple idea that not everyone accepts,” Proceedings on “I (Still) Can’t Believe It’s Not Better!” at NeurIPS 2021 Workshops, PMLR 163:75-79, 2022. The supplement is just as valuable as the main paper.

Further Reading

https://www.mdpi.com/1099-4300/23/8/928/htm

Gelman’s blog had an in-depth discussion: Causal inference in AI: Expressing potential outcomes in a graphical-modeling framework that can be fit using Stan

This comment from Daniel Lakeland in that thread sums up my attitude on this elegantly:

> I found it very frustrating to talk with Pearl regarding these issues (there was a long exchange between us on this blog about 3 or 4 years back), because I came to the conclusion just as you have that his understanding of what is probability theory and statistics is entirely frequentist … and my understanding was Bayesian… and so we talked past each other… He even acknowledged knowing about the development of the Cox/Jaynes theory of probability as extended logic, but seemed to gloss over any actual understanding of it.

Another Gelman post is here: Resolving disputes between J. Pearl and D. Rubin on causal inference
