Thank you all for the great replies! In the meantime I read through a related thread on credible and sensible priors (here), which was a much more technical discussion, and I felt that the option to use expert input often gets discounted almost immediately on the grounds of cognitive biases. The 2019 O'Hagan paper suggested by @Pavlos_Msaouel is also referenced there, and I found it a fascinating and enlightening read that answered some of my questions directly (e.g., there is already a Delphi-type protocol for eliciting expert consensus about prior distributions, although the authors' case study uses a different protocol). An important point made in the paper is that these predictions can almost never be tested against some "truth", and hence no gold standard really exists. But there are standardized, scientific ways to conduct such a process.
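To make the elicitation idea concrete, here is a minimal sketch of the final numerical step of such a process: fitting a Beta distribution to a panel's consensus quantile judgements. This is not the SHELF protocol itself, just an illustration, and the quantile values are entirely invented:

```python
# Illustrative only: fit a Beta prior to hypothetical consensus quantile
# judgements about 28-day mortality (all values invented for this sketch).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

# Hypothetical panel consensus: "we judge the 25th/50th/75th percentiles
# of 28-day mortality to be 0.30 / 0.38 / 0.46".
probs = np.array([0.25, 0.50, 0.75])
quantiles = np.array([0.30, 0.38, 0.46])

def loss(params):
    # Optimize on the log scale so the shape parameters stay positive.
    a, b = np.exp(params)
    return np.sum((beta.ppf(probs, a, b) - quantiles) ** 2)

res = minimize(loss, x0=np.log([2.0, 2.0]), method="Nelder-Mead")
a, b = np.exp(res.x)
print(f"Fitted Beta({a:.1f}, {b:.1f}), prior mean = {a / (a + b):.3f}")
```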
@Pavlos_Msaouel Your process seems sound, and I would guess that the priors you end up with would not differ all that much from the result of a formal elicitation as described by, e.g., O'Hagan. My point is that they still lack external validity, in that other experts may call them into question, especially upon seeing your results (as in the Ioannidis example). Imagine if the Sepsis-3 task force had, alongside the new definitions, published a consensus-based prior distribution for 28-day mortality for a novel intervention in septic shock after conducting such a formal elicitation process. Not only would Ioannidis' comment have been unfounded, but it could also have provided support for the ANDROMEDA group to conduct a Bayesian analysis in the first place (and potentially "strong-armed" JAMA, which would likely have had to accept a statistical analysis instigated by an earlier publication in its own journal).
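To illustrate what such a published consensus prior could enable, a hedged sketch (all numbers invented) of how a later trial group could update it directly with their own data via a conjugate beta-binomial analysis:

```python
# Sketch: updating a hypothetical published consensus prior on 28-day
# mortality with new trial data. Every number below is invented.
from scipy.stats import beta

# Imagined published consensus prior: Beta(15, 25), i.e. prior mean 0.375
# and roughly 40 patients' worth of prior information.
a0, b0 = 15.0, 25.0
deaths, n = 74, 212  # invented trial arm: 74 deaths among 212 patients

# Conjugate beta-binomial update of the published prior with the trial data
posterior = beta(a0 + deaths, b0 + n - deaths)
print(f"Posterior mean mortality: {posterior.mean():.3f}")
lo, hi = posterior.ppf([0.025, 0.975])
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```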
@ESMD The few published Bayesian studies I have seen have all used a range of priors, from skeptical to optimistic, which is of course desirable but not sufficient, as evidenced by Ioannidis' comment (the skeptical prior was not skeptical enough); readers are left without a basis for making that judgement call themselves.
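For concreteness, here is a minimal sketch of what such a skeptical-to-optimistic sensitivity analysis looks like under the common normal approximation to the likelihood; the effect estimate, standard error, and prior settings are all invented:

```python
# Sketch of a prior sensitivity analysis on a log odds ratio, using the
# standard normal-normal approximation. All numbers are invented.
import numpy as np
from scipy.stats import norm

est, se = np.log(0.75), 0.15  # hypothetical trial estimate: OR 0.75

priors = {
    "skeptical":  (0.0, 0.10),           # centered on no effect, tight
    "neutral":    (0.0, 0.50),           # centered on no effect, diffuse
    "optimistic": (np.log(0.70), 0.20),  # centered on a 30% odds reduction
}

for name, (m0, s0) in priors.items():
    # Precision-weighted posterior for a normal prior and normal likelihood
    w0, w1 = 1 / s0**2, 1 / se**2
    m = (w0 * m0 + w1 * est) / (w0 + w1)
    s = np.sqrt(1 / (w0 + w1))
    p_benefit = norm.cdf(0, loc=m, scale=s)  # P(log OR < 0)
    print(f"{name:>10}: P(OR < 1) = {p_benefit:.3f}")
```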
Your point on CPGs is very interesting. I initially considered only the impact such expert consensus distributions could have on the validity of later clinical trials, but they would indeed provide a good starting point for the next revision of the same guidelines: a basis for evaluating the studies published in the interim. And an added benefit of choosing one (or at most a handful of) parameters (such as survival analysis until day 28) would be to make the studies more uniform and meta-analysis easier.
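As a toy illustration of that last point: if every trial reports the same parameter, fixed-effect inverse-variance pooling is only a few lines (the per-trial estimates below are invented):

```python
# Sketch: pooling a common parameter (e.g. a log hazard ratio for 28-day
# survival) across trials by fixed-effect inverse-variance weighting.
# All estimates and standard errors are invented.
import numpy as np

log_hr = np.array([-0.20, -0.05, -0.30])  # hypothetical per-trial estimates
se     = np.array([0.12, 0.15, 0.20])     # hypothetical standard errors

w = 1 / se**2                             # inverse-variance weights
pooled = np.sum(w * log_hr) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print(f"Pooled HR: {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f} to "
      f"{np.exp(pooled + 1.96 * pooled_se):.2f})")
```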
@f2harrell Thank you for your comment and the reference; it was fascinating to read about the same protocol described by O'Hagan being used in a large pharma company. Regarding your last comment, I might be wrong, but I have the impression that people reading this forum already feel good about Bayesian statistics, and even the wider audience is growing more and more wary of the frequentist approach. I think the challenge today is a tactical one: how to make Bayesian methods more accessible and easier to use for a wide audience. In addition to providing the masses with much-needed education in Bayesian methods, as you do with BBR, I think having consensus-based "anchor" priors available could also contribute towards that end.
@R_cubed You bring up an excellent point, if I understand you correctly, i.e., that in addition to the probability of a statement being true, it is also important to consider the consequences of its being true. A common clinical scenario comes to mind, in which the physician must consider not only the most likely but also the most deadly potential diagnosis. Is this a good analogy to your point? Because in that case, I believe, it lends support to my argument that priors (and utility functions) should be defined with, at the very least, the involvement of subject-matter experts.
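A toy numerical version of that scenario (probabilities and losses invented) shows how a less likely but deadlier diagnosis can dominate the decision:

```python
# Toy illustration of weighing probability against consequence: the
# probabilities of each diagnosis and the loss incurred if it is missed
# are both invented for this sketch.
diagnoses = {
    "musculoskeletal pain": (0.70, 1.0),    # likely, low-stakes
    "pulmonary embolism":   (0.05, 100.0),  # unlikely, potentially deadly
}

for name, (p, loss_if_missed) in diagnoses.items():
    print(f"{name}: P = {p:.2f}, "
          f"expected loss if ignored = {p * loss_if_missed:.1f}")

# The less likely but deadlier diagnosis carries the larger expected loss,
# so it drives the workup: the utilities matter as much as the
# probabilities, and both need subject-matter input.
```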