@ChristopherTong, I am not going to push back on what you have said, as methodologists have the right to decide what they accept or reject. However, methodologists tend to focus on whether observational studies can produce “true” or “unbiased” estimates, or whether probability theory is rigorously justified for messy data.
My concern lies elsewhere. Medicine and public health rarely operate under conditions of certainty. Decisions must be made in the presence of imperfect evidence, where waiting for ideal conditions would be actively harmful.
- The real question is utility: Does the method help us make better decisions?
- Probability math, causal inference, and approaches like target trial emulation are tools to reduce error, quantify uncertainty, and guide action, even if they do not produce “perfectly unbiased” truth.
- Focusing on theoretical purity — “We can’t trust observational data because it’s not a randomized trial” — is neither pragmatic nor useful. It leaves clinicians and policymakers paralyzed or forced to rely on anecdote.
John Snow didn’t use probability theory, but he made a decision that saved lives: he traced the 1854 London cholera outbreak to the Broad Street pump and had its handle removed, not because he achieved statistical purity but because he acted on the best available evidence in a structured, thoughtful way. Similarly, modern methods allow us to extract decision-worthy insights from observational data, even when uncertainty remains.
The key point, then, is utility for decision makers. Whether a method is “perfect” is irrelevant if rejecting it leads to worse outcomes. The proper standard is “Does this improve decision-making?”, not “Does this satisfy armchair notions of statistical idealism?”, a standard that need not bother methodologists but should bother decision makers in medicine.