How to interpret “confidence intervals” in observational studies

While I strongly agree with @f2harrell in his support for subjective Bayesian foundations and acknowledge that a Bayesian may even have a valid reason to employ frequentist procedures such as randomization, I suspect I’m even more radical.

My complaint is that frequentist statistical practice engages in cognitive misdirection by deluding readers into believing they know the data generation mechanism, when the only one who can truly know it is the experimenter.

The rest of us place trust in the honesty of the report, but the frequentist aversion to epistemic probability does not permit modelling that trust. So we end up with everyone de facto using “spike” priors on parts of the model of the data generation process, and having false confidence in the results. A formal model of false confidence (into which both Bayesian and frequentist methods can fall) is discussed in detail in this paper from the Royal Society:

Balch, M. S., Martin, R., & Ferson, S. (2019). Satellite conjunction analysis and the false confidence theorem. Proceedings of the Royal Society A, 475(2227), 20180565. https://royalsocietypublishing.org/doi/full/10.1098/rspa.2018.0565
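To make the “spike prior” point concrete, here is a minimal sketch (my own illustration, with purely hypothetical numbers, not taken from the paper above): one analysis conditions on the reported randomization being exactly as described (probability 1, i.e. a spike), while the other reserves a modest probability that the mechanism is not as reported and the estimate is biased by an unknown amount.

```python
# Minimal illustrative sketch: the tacit "spike" prior on the data generation
# mechanism versus a mixture prior that allows the report to be wrong.
# All numbers are hypothetical and chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reported result: estimated treatment effect and its standard
# error, valid only if the randomization was carried out as described.
effect_hat, se = 0.30, 0.10
n_draws = 100_000

# Posterior draws under the tacit "spike" prior: P(report is accurate) = 1.
spike_draws = rng.normal(effect_hat, se, size=n_draws)

# Mixture prior: with probability 0.9 the mechanism is as reported; otherwise
# we only know the effect lies in a much more diffuse range (bias of unknown
# sign and size, modelled here as N(0, 0.5^2) purely for illustration).
p_honest = 0.9
honest = rng.random(n_draws) < p_honest
mixture_draws = np.where(honest,
                         rng.normal(effect_hat, se, size=n_draws),
                         rng.normal(0.0, 0.5, size=n_draws))

for name, draws in [("spike prior (trust = 1.0)", spike_draws),
                    ("mixture prior (trust = 0.9)", mixture_draws)]:
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{name}: 95% interval ({lo:.2f}, {hi:.2f}), "
          f"P(effect > 0) = {np.mean(draws > 0):.3f}")
```

Under the spike prior the interval is narrow and the apparent probability of a positive effect is very high; reserving even 10% probability for a misreported mechanism widens the interval and tempers that confidence, which is exactly the epistemic uncertainty the usual frequentist report leaves unmodelled.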

From the adversarial perspective taken in cryptography, this tacit trust placed in reports of randomization procedures carried out beyond your own control puts the scientific community at risk of being misled by a rogue scientist, a risk for which I am currently collecting a number of case studies.

That Bayesians since Gosset were correct in calling attention to the major limitations of randomization appears to have been all but forgotten.

Related thread:
