The “specific situations” cover every single problem I encounter in med and health controversies. Every linear, logistic, and Cox regression I see has at least a partial-Bayes analog. Conversely, every single Bayesian analysis is directly mappable to a random-parameter frequency model, and simulation from that model can be used to calibrate any proposed posterior computation.
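To make that calibration idea concrete, here is a minimal sketch (my own toy construction, not a method anyone above endorsed) using an assumed normal-normal conjugate model: draw the parameter from the prior, draw data from the sampling model, and tally how often the 95% posterior interval covers the realized parameter. Under the random-parameter frequency model that coverage should be about 95%:

```python
import numpy as np

# Toy random-parameter frequency model (an assumed example):
#   theta ~ N(m0, s0^2) prior; y_1..y_n | theta ~ N(theta, 1).
rng = np.random.default_rng(0)
m0, s0, n, sims = 0.0, 2.0, 20, 10_000
covered = 0
for _ in range(sims):
    theta = rng.normal(m0, s0)           # parameter drawn from the prior
    y = rng.normal(theta, 1.0, size=n)   # data drawn from the sampling model
    # Conjugate posterior: precision-weighted blend of prior and data.
    post_prec = 1.0 / s0**2 + n
    post_mean = (m0 / s0**2 + y.sum()) / post_prec
    post_sd = post_prec ** -0.5
    covered += (post_mean - 1.96 * post_sd <= theta <= post_mean + 1.96 * post_sd)
print(f"Coverage of 95% posterior intervals: {covered / sims:.3f}")  # ~0.95
```

A buggy posterior routine (say, a mis-coded sampler swapped in for the conjugate formulas) would push that frequency visibly away from 0.95 - which is exactly the mapping put to work.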
My impression is that, for some purely psychosocial reasons, extremists on both sides ignore or forget these mappings, which again apply to all the situations I think anyone here (struggling with mere terminology) encounters in real research (not flashy specialty apps in JASA or Biometrika).
My impression is that this failure on both sides is what split statistics into this crazy frequentist/Bayes divide. And that this split has been beyond destructive to thinking, writing, and teaching clearly (inseparable goals) about applied statistics. If the study is at all important and its goal is clear communication of what was observed and what it might portend, both frequentist and Bayesian interpretations need to be brought up. In doing so, my ideal (not always attainable, perhaps) is to develop an analysis model that can be defended as a useful summary of what is known, both for information extraction and prediction, and thus be acceptable from broad frequentist and Bayesian views. Meaning I listen closely to extremist criticisms of the other side even though I don’t buy their claims of owning the only correct methodology…
“But Bayes in total makes fewer assumptions than frequentist. That’s because frequentist requires a whole other step to translate evidence about data to optimal decisions about parameters. This requires many assumptions about sample space that are more difficult to consider than changing a Bayesian prior.”
That’s all just flat wrong in my view. Top frequentists I know say just the opposite, e.g., that Bayesian methods add crucial assumptions in the form of their priors and hide all the sampling-model complexities from the consumer in a black box of posterior sampling. Plus, to top it off, Bayesian conditioning discards what may be crucial information that the model is wrong or hopelessly oversimplified (see Box onward).
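On that last point about conditioning discarding evidence of model failure, the remedy in Box’s line of work is a predictive check: step outside the conditioning and ask how surprising the observed data are under the assumed model. A minimal sketch (my construction, with an assumed normal analysis model and a deliberately heavy-tailed “truth”):

```python
import numpy as np

rng = np.random.default_rng(1)
y_obs = rng.standard_t(df=3, size=50)    # "reality": heavy-tailed data

# Assumed analysis model: y_i ~ N(mu, sigma^2), fit by plug-in estimates.
mu_hat, sd_hat = y_obs.mean(), y_obs.std(ddof=1)

def discrepancy(y):
    # Test statistic sensitive to heavy tails: largest absolute deviation.
    return np.max(np.abs(y - y.mean()))

# Predictive p-value: how often replicated data from the fitted model look
# at least as extreme as what was actually observed.
reps = np.array([discrepancy(rng.normal(mu_hat, sd_hat, size=y_obs.size))
                 for _ in range(4000)])
p_pred = (reps >= discrepancy(y_obs)).mean()
print(f"Predictive p-value: {p_pred:.3f}")  # small => model looks wrong
```

Pure conditioning on the normal model would happily return a posterior for mu either way; the check is what surfaces the “model is wrong” information.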
I see all those problems, and I have seen some Bayesians in denial about or blind to them. Of course, I have seen some frequentists in denial about or blind to real problems like the subjective nature of typical sampling models in applications; that makes uncommitted use of frequentist methods as subjunctive as uncommitted use of Bayesian methods (subjunctive, as in hypothetical, conditional, and heavily cautioned).
“I think our fundamental disagreement, which will not be fixable no matter how long the discussion, is that you feel you are in a position to question any choice of prior distribution, and you are indirectly stating that a prior must be ‘right’. I only believe that a prior must be agreed upon, not ‘right’, and I believe that even a prior that is a bit off to one reviewer will most often provide a posterior distribution that is defensible and certainly is more actionable.”
Sigh, you may be right that it isn’t ‘fixable’, but I see that as stemming from the fact that I am not saying all that (I’ve heard it as you have - but not from me). I thus don’t see you as understanding what I see myself as trying to communicate (on the topic of this page): that I see bad language framing and exclusivity of thought as having plagued all of statistics, from some of the highest theory (e.g., Neyman, DeFinetti) to the lowliest TA for a “Stat 1 for nonmath people” course, as well as publications.
You noted these issues go deep - well, it’s deeper than the frequentist-Bayes split, which I think has become an artificial obsession based on exclusivist thinking in mid-20th century “thought leaders.” One can only wonder what the field would have been like if (for all sides) common practice had stemmed from Good, instead of from Neyman or Fisher for frequentists and from Jeffreys or DeFinetti for Bayesians. Well, thankfully, one applied area did have Box to talk sense: both sides talk about the same models of the world; the only split is that one obsesses over Pr(data|model) and the other over Pr(model|data). But there are plenty of applications (e.g., all of mine) where you need to consider both (albeit usually at different stages of planning and analysis).
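That “same models” point fits in a few lines (my toy example, with an assumed uniform prior): one binomial likelihood, with the frequentist side reading off Pr(data|parameter) and the Bayesian side reading off Pr(parameter|data).

```python
from scipy import stats

# One shared model: y successes in n trials, y | p ~ Binomial(n, p).
n, y = 20, 14

# Frequentist focus, Pr(data | model): P-value for the hypothesis p = 0.5.
p_value = stats.binomtest(y, n, p=0.5).pvalue

# Bayesian focus, Pr(model | data): posterior for p under a Beta(1, 1)
# (uniform) prior, an assumed choice made only for illustration.
posterior = stats.beta(1 + y, 1 + n - y)
lo, hi = posterior.ppf([0.025, 0.975])

print(f"P-value for p = 0.5:          {p_value:.3f}")
print(f"95% posterior interval for p: ({lo:.3f}, {hi:.3f})")
```

Same sampling model, two conditionals of one joint distribution; planning-stage questions lean on the first, post-data summaries on the second.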
Now, every stakeholder has a right to question your prior and your sampling model (your total model), as well as whatever loss function underlies a decision (explicit or implied, there is always one behind every decision) and whatever technique was used to merge those ingredients to produce P-values, posterior probabilities, or whatever. Isn’t our job as statisticians to help make all those ingredients as precise, transparent, and clear as possible, so that criticisms can be anticipated and addressed? And to work with our collaborators to present a compelling rationale for our choices - ideally we’d try to make it compelling to any frequentist, Bayesian, or other interested party. And we’d do so knowing we may have made some choices that will look poor in light of facts we did not realize, leaving our results open to criticism on those grounds. We only err when we don’t take those criticisms seriously, for that is when we fail to walk back our inferences or decisions when the new information calls for that under the methodologic rules we claim to follow (e.g., deductive consistency).
For better or worse, however, to err is human and we also have to mitigate many errors handed down to us from authorities, like the misuse of English (e.g., using “null” for any hypothesis instead of a zero hypothesis), and the belief that the terms “frequentist” and “Bayesian” should refer to anything other than techniques (as opposed to philosophies) when applying statistics (as opposed to arguing about philosophy).