Some thoughts on uniform prior probabilities when estimating P values and confidence intervals

Wrong. Your formulation does not account for prior uncertainty: you are missing a term on what I call the measurement scale, one that incorporates the sample size implied by the prior.

You missed this because of the insistence on deriving everything on the measurement scale.

The way I understand EvZ’s R code for the Bayesian credible interval for the effect is to think of the Bayesian computation as a data-augmented combination of standardized Z scores. Sander Greenland describes using data augmentation to perform Bayesian computations with frequentist software in this post:

In this setup, on the standardized scale, information is represented as a shift from our sampling model N(0,1) in units known as probits. The uniform, improper prior amounts to assuming a previously conducted study with a result of N(0,1). Probits are additive, and our normality assumption allows us to treat our prior and observed data as normally distributed random variables.
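As a concrete illustration (a Python sketch, not from the post; EvZ’s original code is in R), a study result can be placed on the probit scale by mapping its one-sided p-value through the inverse standard-normal CDF:

```python
from scipy.stats import norm

# Hypothetical example: place a study result on the probit (Z) scale.
# A one-sided p-value maps to a Z score via the inverse normal CDF.
p_one_sided = 0.025
z = norm.isf(p_one_sided)   # inverse survival function, same as qnorm(1 - p) in R
# z is about 1.96: a shift of roughly 2 probits from the N(0, 1) sampling model
```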

By the rule for sums of independent normal random variables, our Bayesian posterior after the first study is the sum of two normals, each of the form N(\theta, \sigma^2):
Prior: N(0,1)
Data: N(z,1)
Posterior: N(z,2)

To get back to the standard normal reference scale with a variance of 1, we work with the square root of the variance: the posterior N(z, 2) has standard deviation \sqrt{2}. So our credible interval after the first study is:

Standardized scale: N(z, \sqrt{2}) (written with its standard deviation), i.e., z \pm z_{\alpha/2}\sqrt{2}
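Under these assumptions, the credible interval can be sketched in a few lines (Python rather than EvZ’s R; z = 2 is a made-up value for illustration):

```python
import math
from scipy.stats import norm

z = 2.0  # hypothetical observed Z score from the first study
# Summing prior N(0, 1) and data N(z, 1) gives posterior N(z, 2) on the probit scale.
post_mean = 0.0 + z
post_var = 1.0 + 1.0
post_sd = math.sqrt(post_var)   # sqrt(2)
crit = norm.ppf(0.975)          # ~1.96 for a 95% interval
ci = (post_mean - crit * post_sd, post_mean + crit * post_sd)
```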

Our Bayesian prediction interval on the standard normal scale, assuming a uniform prior, starts from the same posterior as the credible interval but adds a variance term for the sampling variability of the replication.

Prior (for replication and after first study): N(z,2)
(Pseudo) Data: N(0,1)
Posterior: N(z,3)

Posterior Predictive Distribution: N(z, \sqrt{3}) (written with its standard deviation), by the rule for sums of normally distributed random variables followed by standardizing, i.e., z \pm z_{\alpha/2}\sqrt{3}.
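The prediction interval differs from the credible interval only in the variance; a sketch under the same assumptions (again with a hypothetical z = 2):

```python
import math
from scipy.stats import norm

z = 2.0  # hypothetical Z score from the first study
# Posterior-predictive variance = credible-interval variance (2) plus one
# more unit of sampling variance for the replication study: 3 in total.
pred_sd = math.sqrt(2.0 + 1.0)  # sqrt(3)
crit = norm.ppf(0.975)
pi = (z - crit * pred_sd, z + crit * pred_sd)
```

The prediction interval is wider than the credible interval because it must cover the replication's own sampling error, not just our uncertainty about the effect.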

See also:

Cross Validated: Prediction Interval = Credible Interval? – The first answer explains the distinction between the two in a Bayesian analysis well.
