This was going to be my criticism. I’ll only add that your Bayesian solution will not be satisfactory to the dogmatic frequentists because it ends up being “subjective”.
Taken literally, the recommendations in the paper cited by Christopher Tong would require scientists to discount any data they did not personally collect so severely that it might as well be treated as having no credibility. That would include meta-analyses of RCTs, since meta-analyses of RCTs are themselves merely “observational”.
Berlin JA, Golub RM. Meta-analysis as Evidence: Building a Better Pyramid. JAMA. 2014;312(6):603–606 link
JAMA considers meta-analysis to represent an observational design, with measures that should be interpreted as associations rather than causal effects.
I thought Efron’s old paper “Why isn’t everyone a Bayesian?” settled the dispute over foundations: the commentators agreed that the subjectivists had the best arguments, but that practice dictated the use of other procedures that were close approximations.
Efron, B. (1986). Why isn’t everyone a Bayesian?. The American Statistician, 40(1), 1-5. link
If the frequentists are going to complain about the Bayesian prior, why should the Bayesian meekly accept that a study claiming to use randomization is inherently more credible than any other formal model of the data-collection procedure, when other claims from that same agent are treated with doubt, if not strong skepticism?
The Bayesians in recent times (with an important exception) have been too charitable in conditioning on others’ reports that claim to use randomization. Rethinking this naive acceptance can lead to an impasse: how does a community of scientists collect data and conduct experiments to decide questions of fact in a way that leads to a convergence of opinion, which seems to be the explicit goal of scientific inquiry?
The field of mechanism design provides some answers, but none of this appears to be on the radar of statisticians. Mechanism design could unify the frequentist and Bayesian perspectives on the design of experiments by providing a common language to discuss goals and methods in game-theoretic terms, which have already been used in theoretical statistics.
The key idea of mechanism design is identifying goals first and then attempting to design a system that achieves those goals. In other words, at the beginning of the process, the goals are given, and the ideal mechanism is the unknown. This contrasts with “positive” or predictive economics, which studies the actual or likely outcomes of a given system. In that case, the system is the given, and the outcomes are the unknowns.
The paper that comes closest to approaching this frequentist/Bayesian dispute from a mechanism design point of view (without realizing it) is a 2014 paper by Atkinson on biased-coin designs, which I mentioned in this thread:
Atkinson, A. C. (2014). Selecting a biased-coin design. Statistical Science, 29(1), 144–163. link
Biased-coin designs are used in clinical trials to allocate treatments with some randomness while maintaining approximately equal allocation. More recent rules are compared with Efron’s [Biometrika 58 (1971) 403–417] biased-coin rule and extended to allow balance over covariates. The main properties are loss of information, due to imbalance, and selection bias. Theoretical results, mostly large sample, are assembled and assessed by small-sample simulations. The properties of the rules fall into three clear categories. A Bayesian rule is shown to have appealing properties; at the cost of slight imbalance, bias is virtually eliminated for large samples.
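To make Efron’s rule concrete: when the two arms are balanced, the next subject is assigned by a fair coin; otherwise the under-represented arm is favoured with some probability p > 1/2 (Efron suggested p = 2/3). A minimal simulation sketch (the function name and structure are mine, not from the paper):

```python
import random

def efron_biased_coin(n, p=2/3, seed=0):
    """Simulate Efron's (1971) biased-coin allocation for n subjects.

    If the arms are balanced, assign A with probability 1/2; otherwise
    favour the under-represented arm with probability p. Returns the
    allocation sequence and the final imbalance |n_A - n_B|.
    """
    rng = random.Random(seed)
    n_a = n_b = 0
    sequence = []
    for _ in range(n):
        diff = n_a - n_b
        if diff == 0:
            prob_a = 0.5
        elif diff < 0:        # A under-represented: favour A
            prob_a = p
        else:                 # B under-represented: favour B
            prob_a = 1 - p
        arm = "A" if rng.random() < prob_a else "B"
        sequence.append(arm)
        if arm == "A":
            n_a += 1
        else:
            n_b += 1
    return sequence, abs(n_a - n_b)

seq, imbalance = efron_biased_coin(100)
```

The trade-off Atkinson studies is visible even in this toy version: raising p toward 1 shrinks the imbalance (and hence the loss of information) but makes the next assignment more predictable, increasing selection bias.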
Recommended Reading
Briggs is a bit extreme in his critique, but his fundamental point is valid, and his argument that “randomization” is a religious ritual that “blesses” a data set can explain the rise of “non-comparative randomized trials” discussed in this other thread:
The following should be studied together with the Greenland paper below, as it presents a coherent way to do the probabilistic bias analysis that @Sander has advocated for decades, based on Bayesian trust modelling (which is used in information-security contexts):
Jøsang, A., Hayward, R., & Pope, S. (2006). Trust network analysis with subjective logic. In Proceedings of the Twenty-Ninth Australasian Computer Science Conference (ACSW 2006) (pp. 85–94). Australian Computer Society. link
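The core object in subjective logic is an opinion (belief, disbelief, uncertainty, base rate) with b + d + u = 1, and the trust-discounting operator weakens a reporter’s claim by our opinion of the reporter. A sketch of that operator as I understand it from Jøsang’s work (the class and function names are my own, and the clinical-trial interpretation in the comments is my gloss, not the paper’s):

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A subjective-logic opinion. Invariant: belief + disbelief + uncertainty == 1."""
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def expected(self) -> float:
        # Expected probability of the proposition: E = b + a * u
        return self.belief + self.base_rate * self.uncertainty

def discount(trust: Opinion, report: Opinion) -> Opinion:
    """Trust-discounting: `trust` is our opinion of the reporter's reliability,
    `report` is the reporter's opinion about a proposition (e.g. "randomization
    was carried out as claimed"). Distrust and uncertainty about the reporter
    become uncertainty about the proposition."""
    return Opinion(
        belief=trust.belief * report.belief,
        disbelief=trust.belief * report.disbelief,
        uncertainty=trust.disbelief + trust.uncertainty
                    + trust.belief * report.uncertainty,
        base_rate=report.base_rate,
    )

# A reporter we mostly trust makes a confident claim about randomization.
trust_in_reporter = Opinion(belief=0.7, disbelief=0.1, uncertainty=0.2)
claim = Opinion(belief=0.9, disbelief=0.0, uncertainty=0.1)
discounted = discount(trust_in_reporter, claim)
```

Note how this formalizes the point above: a claim of randomization is never accepted at face value; its evidential weight is capped by our trust in the agent reporting it.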
Of course, this 2005 paper by @Sander_Greenland remains relevant to this thread. Note that he also mentions the importance of bias analysis in the context of randomized studies.
Greenland, S. (2005). Multiple-bias modelling for analysis of observational data. Journal of the Royal Statistical Society Series A: Statistics in Society, 168(2), 267-306. PDF
This paper by Philip Stark is also worth re-examination (updated for the actual published citation):
Stark, P. B. (2022). Pay no attention to the model behind the curtain. Pure and Applied Geophysics, 179(11), 4121-4145. link