# How to model an outcome measure bounded between 0 and 100 using Bayesian analysis

I’m trying to create a Bayesian regression model for parameter estimation. The outcome is a PROM (patient-reported outcome measure) that can take any value between 0 and 100. I initially created a model in JAGS:
```r
mod1_string = " model {
  for (i in 1:length(y)) {
    # dnorm in JAGS is parameterized by precision, so 1/25 fixes the
    # residual SD at 5
    y[i] ~ dnorm(mu[i], 1/25)
    mu[i] = int +
      b_1 * continuous1[i] +
      b_2 * continuous2[i] +
      b[1] * ordinal1[i] +
      b[2] * ordinal2[i] +
      b[3] * ordinal3[i] +
      b[4] * ordinal4[i]   # fixed typo: was 'ordina4'
  }

  int ~ dnorm(-5, 1.0/25.0)

  # only b[1]..b[4] are used above, so the loop runs to 4; it previously
  # ran to 10, leaving b[5]..b[10] sampled from the prior but unused
  for (j in 1:4) {
    b[j] ~ ddexp(0, sqrt(2))
  }

  b_1 ~ dnorm(0.06, 1/1)
  b_2 ~ dnorm(0.03, 1/1)
} "
```
The values for my continuous priors were taken from the literature. My ordinal priors were set to double exponentials centered on 0. What I would prefer is a model whose predictions are also bounded between 0 and 100. Should I rescale my outcome to lie between 0 and 1 and then apply a logit link to mu? If so, how do I choose the distribution of Y?
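One way to sketch the logit idea raised in the question (this is an illustrative, untested model, not something from the thread): rescale y to the open interval (0, 1) and use a beta likelihood with a logit link. The compression `(y/100 * (n - 1) + 0.5) / n` is the Smithson–Verkuilen transform, which keeps observed 0s and 100s off the boundary; the names `y_star` and `phi` and the prior values are my own placeholders.

```r
mod2_string = " model {
  for (i in 1:n) {
    # beta likelihood in mean/precision form: mean mu[i], precision phi
    y_star[i] ~ dbeta(mu[i] * phi, (1 - mu[i]) * phi)
    logit(mu[i]) <- int +
      b_1 * continuous1[i] +
      b_2 * continuous2[i] +
      b[1] * ordinal1[i] +
      b[2] * ordinal2[i] +
      b[3] * ordinal3[i] +
      b[4] * ordinal4[i]
  }
  int ~ dnorm(0, 1.0/25.0)
  for (j in 1:4) {
    b[j] ~ ddexp(0, sqrt(2))
  }
  b_1 ~ dnorm(0, 1/1)   # note: coefficients now live on the logit
  b_2 ~ dnorm(0, 1/1)   # scale, so literature values for raw-scale
                        # slopes cannot be used directly as prior means
  phi ~ dgamma(0.1, 0.1)  # precision of the beta distribution
} "
```

A caveat with this approach is the last comment above: priors elicited on the 0–100 scale would need to be re-expressed on the logit scale before being used here.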

---

What about Bayesian semiparametric regression? The blrm function in the R rmsb package will fit a proportional odds model to a continuous Y. See here for detailed examples.

---

Thanks. I thought it might be, but I have trouble understanding how to set the shape parameters, even after reading the linked pdf.

Thanks. I’ve reviewed your notes on this from both the BBR and RMS courses and have also run through some of your examples. I ran a model using blrm on my data and got the following warning:
“Some Pareto k diagnostic values are too high.”
I realise from the help file that this has something to do with LOO. My knowledge of LOO is pretty much limited to what is in “An Introduction to Statistical Learning” by James et al. Should I be concerned about this? The R-hat values are all good, as are the effective sample sizes, and graphically the posterior chains demonstrate good convergence.
Also, am I correct in understanding that I cannot change the priors other than their SD?

---

I don’t think you need to worry about that. You have choices of priors for the intercepts and for the SD of random effects. For the betas you only get to choose the SD, and you need to be careful when some variables are combined in the orthonormalization phase. There is an argument to keep selected covariates separate so that they’ll have identifiable normal priors.
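As an illustration of the last point, a hypothetical call might look like the sketch below. The argument names `priorsd` (SD of the normal priors on the betas) and `keepsep` (a regular expression naming design-matrix columns to exclude from the QR orthonormalization) are taken from my reading of the rmsb documentation and should be checked against the installed version; the formula and data names are placeholders.

```r
require(rmsb)

# Hypothetical sketch: keep 'continuous1' out of the orthonormalization
# so its coefficient retains an identifiable normal prior with SD 10
fit <- blrm(y ~ continuous1 + continuous2 + ordinal1,
            data = d,
            priorsd = 10,
            keepsep = 'continuous1')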

---

The best quick resource for interpreting LOO warnings at the moment is here. It is maintained by Aki Vehtari, one of the people who designed the PSIS-LOO method for fast approximate cross-validation of Bayesian models.

---