I think it is more useful to understand the relation between frequentist and Bayesian perspectives, so that when we are inevitably given a frequentist estimate or test result, we will interpret it appropriately.
The math to justify this isn’t terribly complicated; much can be done with algebra and/or calculus.
To that end, I’ve found the following helpful.
- NISS webinar on p-values with James Berger, Sander Greenland, and Robert Matthews. Berger describes the relationship between Bayesian updating and frequentist error probabilities. Matthews runs the usual Bayesian calculation in reverse, deriving the skeptical prior that the observed result would just fail to overturn, so that observed results can be compared and contrasted with prior information. This makes it clear that a Bayesian prior can be re-interpreted as a frequentist shrinkage device.
- A published version of the procedure described by Berger in the video is here:
https://www.sciencedirect.com/science/article/pii/S002224961600002X
- Robert Matthews (more recently with Leonhard Held) has elaborated the Bayesian Analysis of Credibility in a number of papers; I mentioned two of them in this thread.
- After studying the papers above, I think there is value in considering the arguments of Michael Evans, who makes explicit a Bayesian-derived approach known as Relative Belief. Compare his formulas with those in Berger’s Bayesian Rejection Ratio paper and presentation.
https://www.sciencedirect.com/science/article/pii/S2001037015000549
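To make the comparison concrete, here is a minimal sketch of two quantities from these readings: the Sellke–Bayarri–Berger calibration, which bounds the Bayes factor against the null that a p-value can represent (−e·p·ln p, valid for p < 1/e), and Evans’ relative belief ratio, which is just posterior density over prior density. The beta-binomial setting and all numbers are my own choices for illustration, not taken from the papers.

```python
import math

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x (via math.gamma)."""
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * x ** (a - 1) * (1 - x) ** (b - 1)

def relative_belief(theta, a, b, successes, n):
    """Evans' relative belief ratio RB(theta) = posterior / prior density,
    here for a binomial likelihood with a conjugate Beta(a, b) prior."""
    post = beta_pdf(theta, a + successes, b + n - successes)
    prior = beta_pdf(theta, a, b)
    return post / prior

def min_bayes_factor(p):
    """Sellke-Bayarri-Berger bound: the Bayes factor in favor of the null
    is at least -e * p * ln(p), for p < 1/e."""
    assert 0 < p < 1 / math.e
    return -math.e * p * math.log(p)

# A p-value of 0.05 corresponds to odds of at most ~2.5 to 1 against the null.
print(round(1 / min_bayes_factor(0.05), 2))   # → 2.46

# RB > 1 means the (illustrative) data increased belief in theta = 0.5.
print(relative_belief(0.5, a=1, b=1, successes=12, n=20) > 1)  # → True
```

The striking point, made repeatedly in the webinar, is how weak p = 0.05 looks once translated into a Bayes factor bound.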
As a matter of the philosophy of statistics, I think there is a lot to agree with in Evans’s paper. But in practice, a sound philosophy won’t help you compute a plausible solution to a concrete problem. A good frequentist estimate can often stand in for a Bayesian posterior when the posterior is too difficult to compute.
The math needed to justify that is complex, but at the frontiers of statistical theory, many interesting results are coming from work that examines the relationships among Bayesian, frequentist, and fiducial procedures. [See the old paper by Hjort, added below.] This research program has a cute acronym: BFF, for Bayes, Frequentist, Fiducial (and, presumably, Best Friends Forever). The following is an interesting, though very technical, paper.
Efron has an interesting paper on the relationship between frequentist bootstrap procedures and Bayesian posteriors that is a bit easier to follow.
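The flavor of that bootstrap–posterior connection can be seen in a few lines: Rubin’s Bayesian bootstrap reweights the data with Dirichlet(1, …, 1) weights instead of resampling, and for smooth statistics the two sampling distributions nearly coincide. A minimal sketch (the data, seed, and replication count are arbitrary choices of mine):

```python
import random
import statistics

random.seed(1)
data = [random.gauss(10.0, 2.0) for _ in range(200)]
B = 2000

# Frequentist nonparametric bootstrap: resample the data with replacement.
freq = [statistics.fmean(random.choices(data, k=len(data))) for _ in range(B)]

def bayes_boot_mean(xs):
    """One Bayesian bootstrap draw: Dirichlet(1, ..., 1) weights, realized
    as normalized exponentials, applied as a weighted mean."""
    w = [random.expovariate(1.0) for _ in xs]
    return sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)

bayes = [bayes_boot_mean(data) for _ in range(B)]

# The two spreads should be nearly identical (about 2 / sqrt(200) here).
print(round(statistics.stdev(freq), 3), round(statistics.stdev(bayes), 3))
```

Seeing the two distributions line up numerically makes Efron’s argument much less mysterious than the theory alone.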
Other articles on the relationship between bootstrap and posterior distributions:
Newton, M.A. and Raftery, A.E. (1994), Approximate Bayesian Inference with the Weighted Likelihood Bootstrap. Journal of the Royal Statistical Society: Series B (Methodological), 56: 3-26. https://doi.org/10.1111/j.2517-6161.1994.tb01956.x
Newton, M.A., Polson, N.G. and Xu, J. (2021), Weighted Bayesian bootstrap for scalable posterior distributions. Can J Statistics, 49: 421-437. https://doi.org/10.1002/cjs.11570
Hjort, N. L. (1991). Bayesian and empirical Bayesian bootstrapping. Statistical Research Report (Preprint Series). https://www.duo.uio.no/bitstream/handle/10852/47760/1/1991-9.pdf
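Newton and Raftery’s weighted likelihood bootstrap, from the first reference above, replaces resampling with random Dirichlet reweighting of the log-likelihood; when the weighted MLE has a closed form, each approximate posterior draw is one line of code. A minimal sketch for an exponential model, where the weighted MLE is sum(w) / sum(w·x) — the model choice, true rate, and sample sizes are mine, for illustration only:

```python
import random
import statistics

random.seed(2)
# Simulated data from an Exponential(rate = 2) model.
data = [random.expovariate(2.0) for _ in range(300)]

def wlb_draw(xs):
    """One weighted likelihood bootstrap draw: maximize the
    Dirichlet-weighted exponential log-likelihood
    sum_i w_i * (log(rate) - rate * x_i), whose maximizer is
    sum(w) / sum(w * x)."""
    w = [random.expovariate(1.0) for _ in xs]
    return sum(w) / sum(wi * xi for wi, xi in zip(w, xs))

# The draws approximate a posterior for the rate, concentrating
# around 1 / mean(data), i.e. near the true rate of 2.
draws = [wlb_draw(data) for _ in range(2000)]
print(round(statistics.fmean(draws), 2))
```

This is the same Dirichlet-weights trick as the Bayesian bootstrap, but applied to a parametric likelihood rather than directly to the data.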