rmsb package blrm function

Hi all,

I have a question regarding the blrm function in the rmsb package in R.
I have found some randomness when I use blrm to fit a Bayesian proportional odds model: fitting the model to the same dataset, I get results different from what I got months ago. Is there any way to control this? set.seed() does not seem to work.


I think the answer to this will depend on how different the results are between the two runs. Also, AFAIK the rmsb package does its Bayesian inference through Stan, so it’s worth checking the documentation over there as well. If it’s just some fuzziness in the coefficient estimates that doesn’t substantially change the predicted outcomes, then you probably just need tighter reproducibility control, because MCMC (or technically, Hamiltonian Monte Carlo) methods have a lot of randomness by design (see 20 Reproducibility | Stan Reference Manual and Question about the Reproducibility of Stan Results - #6 by jsocolar - Algorithms - The Stan Forums). If the model runs are producing clinically relevant differences, I would first make sure that the model is fitting properly in the first place – e.g. are you checking Rhat, raising adapt_delta to prevent divergences after warmup, inspecting pairs plots, etc.?
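For the diagnostic checks mentioned above, a minimal sketch in R might look like the following. The formula, data frame, and variable names here are hypothetical placeholders; `stanDx` and `stanDxplot` are rmsb helpers for Stan diagnostics, but check them against your installed rmsb version.

```r
library(rmsb)

set.seed(2024)   # fix R's RNG before any sampling

# Hypothetical model -- substitute your own formula and data frame d
f <- blrm(y ~ age + treatment, data = d)

stanDx(f)        # Stan diagnostics: divergences, Rhat, effective sample sizes
stanDxplot(f)    # trace plots to inspect chain mixing visually
```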


Thank you so much for your reply.

The model fit is pretty good (I checked Rhat etc.), and the mode of the posterior distribution is quite stable. My problem is obtaining a stable 95% credible interval, since the posterior draws differ between runs.

What confuses me most is that if I run my code on my dataset multiple times during the same R session, I always get result A. However, if I first run the same code on a different dataset and then rerun it on my current dataset, I get results that are close to, but different from, result A. It is confusing.
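That behavior is consistent with the sampler drawing from R’s global random-number stream: running another analysis first advances the stream, so the second fit starts from a different RNG state. A minimal sketch of the idea using base R’s RNG (not blrm itself):

```r
set.seed(42)
rnorm(3)          # "result A"

set.seed(42)
rnorm(5)          # a different analysis consumes part of the stream
rnorm(3)          # no longer "result A" -- the RNG state has advanced

set.seed(42)      # re-seeding immediately before the call restores it
rnorm(3)          # "result A" again
```

The practical fix is to call set.seed() immediately before each model fit, not just once at the top of the script.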

Thank you again for your reply :slight_smile:

It’s good to call set.seed(some constant) before using any function that entails randomness (MCMC, the bootstrap, etc.). If you want someone else to be able to reproduce your results precisely even with a different random number seed, you’ll have to go from the default 4,000 saved posterior draws to perhaps 50,000.
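As a sketch of both suggestions together: assuming blrm’s default of 4 chains with half of iter used for warmup (so iter = 2000 yields 4,000 saved draws), something like the following would raise that to roughly 50,000. The iter and chains arguments follow rstan-style conventions, and the formula and data are hypothetical; verify against your installed rmsb version.

```r
library(rmsb)

set.seed(13)   # seed immediately before the fit

# 25,000 iterations per chain, half warmup, 4 chains -> ~50,000 saved draws
f <- blrm(y ~ age + treatment, data = d, iter = 25000, chains = 4)
```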
