RMS Discussions

After days 3 and 4, I think I'm starting to get what the highlight of the RMS course is: the Bayesian longitudinal ordinal model for clinical data.

  1. Why a longitudinal ordinal model for clinical data instead of just the Cox model?
  • Dynamic versus static - with the Cox model, we can usually model only one end-point for the outcome of interest. This does not translate well into clinical practice, where patients constantly move between different stages of disease. Incorporating an ordinal model along with a Markov process to handle correlation allows us to generate more knowledge from the same trials
    • This would provide more information to inform decisions and might resolve some arbitrary cut-off points. For example, in a trial of hospital treatment outcomes for pneumonia, if we only record time to re-hospitalization, we ignore the nuances that happen in between and lose power. With an ordinal structure, we can nest outcomes like:
      • Visiting pharmacy for respiratory problem
      • Seeing a GP for respiratory problem
      • Re-hospitalization for respiratory problem
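To make the nesting concrete, here is a minimal pure-Python sketch of how such a nested outcome could be scored with a proportional odds model; the cut-off intercepts and treatment effect below are made-up numbers, not from any real trial:

```python
import math

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

# Ordered states, worst last (hypothetical example)
states = ["no event", "pharmacy visit", "GP visit", "re-hospitalization"]
alphas = [1.5, 0.0, -1.5]  # intercepts for P(Y >= j), j = 1..3 (assumed values)
beta = -0.4                # treatment log odds ratio, shared across cut-offs (assumed)

def category_probs(x):
    """Proportional odds: P(Y >= j | x) = expit(alpha_j + beta * x)."""
    exceed = [1.0] + [expit(a + beta * x) for a in alphas] + [0.0]
    return [exceed[j] - exceed[j + 1] for j in range(len(states))]
```

The single shared beta is exactly the "parallelism" assumption: one treatment effect applies at every cut-off.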
  2. Flexibility of ordinal models
  • Frank posits the ordinal model with a Markov process as a general framework for handling a variety of data:
    • “Elegantly handling absorbing states like death within the same model structure, which can be challenging for recurrent event models”
    • Can be used on diverse outcomes (binary, discrete ordinal, continuous, mixed)
    • Computationally efficient with current software for the full likelihood
  3. Why Bayesian?
  • I share Frank’s sentiment on the philosophy of frequentist statistics - repeating the exact experiment many times. In reality, most events and trials are close to unique because of circumstances (different times, populations, and available treatments, to name a few). It does not seem feasible to repeat the same experiments.
  • “The Bayesian framework provides a continuous bridge between standalone mortality evidence and evidence drawn from the combination of nonfatal and fatal endpoints” - Frank’s quote from “borrowing information on outcomes”
  • Longitudinal ordinal Markov models can estimate clinically meaningful quantities like state occupancy probabilities (the probability of being in a specific state at a given time) or expected time spent in various states (like mean time unwell or restricted mean survival time)
    • Frequentist methods, while they can also estimate these quantities, often rely on approximations or provide less direct statements about the parameters themselves compared to the Bayesian posterior distribution
  • “And the unique contribution of a Bayesian model is the ability to borrow information about covariate effects across levels of the outcome variable, i.e., to constrain how different a treatment effect is on death compared to its effect on nonfatal outcomes.” - Frank comments on this post
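As a sketch of what a state occupancy probability is: suppose a three-state discrete-time Markov process (well / unwell / dead, with death absorbing) and a hypothetical transition matrix. The occupancy vector at time t is just the initial vector propagated through the matrix:

```python
# States: 0 = well, 1 = unwell, 2 = dead (absorbing). All probabilities hypothetical.
P = [
    [0.90, 0.08, 0.02],
    [0.30, 0.60, 0.10],
    [0.00, 0.00, 1.00],
]

def occupancy(start, steps):
    """State occupancy probabilities after `steps` transitions from state `start`."""
    probs = [1.0 if i == start else 0.0 for i in range(3)]
    for _ in range(steps):
        probs = [sum(probs[i] * P[i][j] for i in range(3)) for j in range(3)]
    return probs

# Expected days unwell over 30 days = sum over t of P(unwell at day t)
mean_days_unwell = sum(occupancy(0, t)[1] for t in range(1, 31))
```

Quantities like mean time unwell fall out directly, which is part of what makes these models clinically interpretable.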
  4. Current drawbacks and the road ahead
  • Every model has its assumptions. The assumptions for ordinal models are:

a) Relationship between X and Y (e.g., linearity/shape of effects, additivity)

b) The “parallelism” assumption on the link scale (that covariate effects are constant across different cut-offs or outcome levels). Frank said he’s working to overcome b), and I’m looking forward to hearing about this in the future

Solutions to overcome these assumptions:

  • Splines and chunk tests to overcome a), though it is a lot of work to arrive at the correct functional form

  • ordParallel, partial proportional odds models, and Bayesian priors to overcome the b) assumption

  • I think the potential is definitely there, and there is a lot of work to be done. With the Bayesian ordinal longitudinal model, picking priors to incorporate prior data and non-parallelism seems to be more of an art, which would be met with a lot of resistance in settings where things are demanded to be standardized.
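One way to picture relaxing b): in a partial proportional odds model, a covariate gets a common slope plus cut-off-specific departures (which a Bayesian prior can shrink toward zero). A minimal sketch with made-up numbers:

```python
import math

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

alphas = [1.0, -0.5, -2.0]  # cut-off intercepts (assumed)
beta = -0.3                 # common treatment slope (assumed)
taus = [0.0, 0.1, -0.2]     # cut-off-specific departures; first fixed at 0 (assumed)

def exceedance(x):
    """P(Y >= j | x) with a cut-off-specific slope beta + tau_j."""
    return [expit(a + (beta + t) * x) for a, t in zip(alphas, taus)]

# With all taus at zero this collapses back to the ordinary proportional odds model
```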

I’m hopeful. Thank you Frank and Drew for the course again; I would love to hear people’s thoughts on this.


Nicely put. Instead of Markov chain we usually say Markov process in this context. And the unique contribution of a Bayesian model is the ability to borrow information about covariate effects across levels of the outcome variable, i.e., to constrain how different a treatment effect is on death compared to its effect on nonfatal outcomes.


@license007

Very nice summary

I am still learning, but one of the things I struggle with is how to apply RMS principles in the PK/PD world. We have hidden variables, distributional assumptions, assumed additivity of covariate effects, etc. Further, the equation of interest is an ODE that must be solved numerically.

I am sure with some modification it can all be brought in, but I am struggling to employ it in practice. RMS is obviously great for E-R, though.

Interested to get your take on nonlinear mixed effects modeling and how to tackle it.

Many of the RMS methods and principles will apply to PK/PD. But be ready for one of the principles to annoy you: the correlation structure of the chosen model should match the data-generating process. Random effects will seldom do that for serial/longitudinal data.

Glad the post was helpful for you Matthew!

I view PK/PD as empirically driven - modeling from observations and beliefs. I only think about:

  1. Compartment models (oral drugs, for example: gut → systemic → liver → target) (I think the correlation part @f2harrell mentioned is here?)
  2. Association/dissociation rates (binding and tissue distribution)
  3. Half-lives

I solve ODEs through Python packages; I have never modeled PK/PD in R.
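For concreteness, here is what the simplest version of that looks like in pure Python - a one-compartment model with first-order absorption, solved both in closed form and with a naive Euler step. All parameter values are hypothetical:

```python
import math

# One-compartment PK model with first-order absorption (hypothetical parameters)
ka, ke, V, dose = 1.2, 0.15, 30.0, 500.0  # absorption /h, elimination /h, volume L, mg

def conc_analytic(t):
    """Closed-form plasma concentration (mg/L)."""
    return dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def conc_euler(t, dt=0.001):
    """Same model solved numerically: dA_gut/dt = -ka*A_gut, dA_c/dt = ka*A_gut - ke*A_c."""
    a_gut, a_c = dose, 0.0
    for _ in range(int(t / dt)):
        a_gut, a_c = a_gut - ka * a_gut * dt, a_c + (ka * a_gut - ke * a_c) * dt
    return a_c / V
```

In practice one would use an ODE solver rather than hand-rolled Euler, but the point is that the "equation of interest" is just a small system like this.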

I don’t think about hidden variables or distributional assumptions very often, because the approach above has served me well enough. To be specific, I model drug distribution to achieve the minimum viable dose for patients. There is a lot of variety in PK/PD models.

Maybe I should look into what you outlined - do you have any pointers for me to read?