I recently read here in datamethods this discussion on prior distributions:

https://discourse.datamethods.org/t/580

I’m impressed by the range of priors that have been proposed for general use, without reference to any specific scientific problem, from cauchy(0, 2.5) to normal(0, 1/4). That is a considerable range of skepticism!
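To make that range concrete, here is a quick sketch (mine, not from the linked thread) comparing how much prior mass each default puts outside the interval (-1, 1), reading the second argument as the scale, per Stan's convention:

```python
import math

def cauchy_tail(x, scale):
    """Pr(|X| > x) for X ~ Cauchy(0, scale)."""
    return 2 * (0.5 - math.atan(x / scale) / math.pi)

def normal_tail(x, scale):
    """Pr(|X| > x) for X ~ Normal(0, scale)."""
    return 2 * (1 - 0.5 * (1 + math.erf(x / (scale * math.sqrt(2)))))

print(cauchy_tail(1, 2.5))   # ~0.76: most of the mass lies outside (-1, 1)
print(normal_tail(1, 0.25))  # ~6e-5: essentially no mass outside (-1, 1)
```

So one "weakly informative" default considers effects beyond ±1 the norm, while the other all but rules them out.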

At what point does it not suffice to say “with weakly informative priors”?

My question is more about transparent reporting of research methods than about how to analyze data. On the one hand, more disclosure ought to be better. On the other hand, at some point more information becomes too much information. For example, I never read in papers using frequentist GLMs that the ML estimates were found with IWLS optimization.

Thanks in advance!


My two cents: *always*. If there are a lot of parameters and thus a bunch of priors, send them to the Supplementary Material/Appendix with a clear reference in the main text. The point is that someone should be able to reproduce your analyses exactly, at least with regard to the construction of the statistical model. Moreover, as far as priors are concerned, someone should be able to critique your priors in terms of probabilistic statements, e.g., “this choice of priors makes \operatorname{Pr}(R_0 > 100) = 1/2, which is not reasonable”.
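As a hypothetical illustration of that kind of critique: suppose a model placed a lognormal prior on a basic reproduction number, log(R_0) ~ Normal(mu, sigma). A reader who knows mu and sigma can then compute the prior's implied Pr(R_0 > 100) directly:

```python
import math

def pr_R0_exceeds(threshold, mu, sigma):
    """Pr(R0 > threshold) when log(R0) ~ Normal(mu, sigma)."""
    z = (math.log(threshold) - mu) / sigma
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

# A prior whose median exp(mu) equals 100 puts half its mass on
# epidemics with R0 > 100 -- exactly the implication a reader
# should be able to check and object to.
print(pr_R0_exceeds(100, mu=math.log(100), sigma=1.0))  # 0.5

# A prior centred at exp(0) = 1 implies almost no such mass.
print(pr_R0_exceeds(100, mu=0.0, sigma=1.0))
```

This is only possible if the paper (or its appendix) states the priors explicitly, which is the point.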

On the IWLS analogy, I don’t think that’s a fair comparison. Inferences are often sensitive to the prior specification, but seldom to the choice of optimisation algorithm. Besides, the analogy conflates model specification with computation, which is a sure-fire way of getting things wrong.
