Is anyone here aware of any reading on the approach to an analysis where there is a competing risk between death and complications, but an individual can have any combination of the complications?
For example: I am looking to predict the joint probability of a collection of 20 complications while accounting for death as a competing risk, all predicted from patient characteristics. We have access to a dataset with 20k+ patients with longitudinal observations over a period of 8-10 years.
In simpler cases, I have simply used a multinomial or state-transition model organized into mutually exclusive categories:
no event | complication 1 | complication 2 | complications 1 and 2 | Death
This seems crude to me, and a little absurd to look at all combinations of 20 complications. Is anyone aware of a more elegant/correct approach? Ease of interpretability isn’t of concern.
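To make the "absurd" part concrete: the mutually exclusive categories above are the non-empty and empty subsets of the complication set, plus death. A toy Python sketch (state labels invented to mirror the example) shows how the state space blows up:

```python
from itertools import combinations

def mutually_exclusive_states(complications):
    """State space of the multinomial approach with mutually
    exclusive categories: every subset of complications, plus death."""
    states = ["no event"]
    for r in range(1, len(complications) + 1):
        for combo in combinations(complications, r):
            states.append(" and ".join(combo))
    states.append("Death")
    return states

# The 2-complication example from the post: 5 categories.
print(mutually_exclusive_states(["complication 1", "complication 2"]))

# With 20 complications the category count is 2**20 + 1.
print(2**20 + 1)  # 1048577
```

So with 20 complications the multinomial would need over a million categories, almost all of them empty even with 20k+ patients.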
This is a great question and I hope you get responses from people who understand competing risks better than I do. My understanding is that competing-risk models were designed for situations such as multiple types of fatal events, e.g., where you are interested in one specific cause of death but that cause can be “interrupted” by other causes. Accordingly, software such as Terry Therneau’s R survival package only allows a single event code per subject.
With 20 complications I’m tempted to convert them to an ordinal severity variable, i.e., rank-order the complications and assign each patient the worst complication they experienced. You could even make death the worst category and use one ordinal model. A really interesting take on this is this paper, where complications were scored by how well they predict death. If that is not the correct paper, I know the one I’m thinking of was written by the same authors.
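A minimal sketch of the collapse-to-worst step, with an invented severity ranking (the complication names and ranks here are purely illustrative, not from any paper):

```python
# Hypothetical severity ranking (higher = worse); death ranked worst.
SEVERITY = {"retinopathy": 1, "neuropathy": 2, "renal failure": 3, "death": 4}

def worst_outcome(events):
    """Collapse a patient's set of observed events to the single
    worst one, giving an ordinal outcome; 'no event' if none occurred."""
    if not events:
        return "no event"
    return max(events, key=SEVERITY.__getitem__)

print(worst_outcome({"retinopathy", "renal failure"}))  # renal failure
print(worst_outcome(set()))                             # no event
```

The ordinal model is then fit to this single collapsed outcome rather than to all 2^20 combinations.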
I also like state transition models, but I still think you need to reduce the complication space to use them.
I am also interested in answers to this question. I described a random-effects approach that allows for different types of events and mortality as a joint frailty; it appeared in Stat Mod this week, although it’s a few years old: Stat Mod current issue. I have not used it, but the R package named frailtypack sounds quite powerful; not sure if it fits your purpose. As @f2harrell says, these don’t sound like ‘competing risks’, because one type of event doesn’t preclude another type of event from occurring?
This looks like a great article from the abstract, but sadly my institution is not enlightened enough to subscribe to Stat Mod, so I’m paywalled. Do you have a version you are allowed to share directly?
I guess the idea I’m thinking of is that you can’t have any complication if you’re dead, so any complication and death are competing; but I also need to know which events a patient had so I can add up costs.
Would it be wholly inappropriate to break the economic model into two parts where at each time point we calculate:
- Whether the person is dead, and conditional on being alive, whether they had any complication
- Given that they had a complication, a separate multivariate model to tally up which ones they had.
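The two-part split above can be sketched as an expected-cost calculation at one time point. All numbers below are made up for illustration; the part-2 quantities are treated as expected counts of each specific complication, given that any occurred (they need not sum to 1, since complications co-occur):

```python
def expected_cost(p_dead, p_any_given_alive, exp_count_given_any, unit_costs):
    """Two-part expected cost at one time point.
    Part 1: P(alive) and P(any complication | alive).
    Part 2: expected count of each specific complication, given any."""
    p_alive = 1.0 - p_dead
    return sum(
        p_alive * p_any_given_alive * exp_count_given_any[c] * unit_costs[c]
        for c in unit_costs
    )

# Hypothetical inputs, purely for illustration:
cost = expected_cost(
    p_dead=0.05,
    p_any_given_alive=0.40,
    exp_count_given_any={"complication 1": 0.7, "complication 2": 0.5},
    unit_costs={"complication 1": 1000.0, "complication 2": 2000.0},
)
print(cost)  # roughly 646.0
```

Whether this decomposition is defensible depends on the part-2 model capturing the correlation among complications, which is exactly what the full joint model would give you.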
Sure: https://journals.sagepub.com/eprint/86qEsHtBJHuZjNRQyr7H/full. Maybe we should delete the link later; I’m not sure how seriously to take the warning about posting the link on a message board.
I see. Rogers and Pocock had a nice paper on the joint frailty model where they compared it to alternatives: Rogers & Pocock et al. joint frailty. There will be a lot of chat in the paper that isn’t of interest, but skip to the section “Jointly Modelling Recurrent Heart Failure Hospitalisations and Cardiovascular Death”.
Re “I also need to know which events you had so I can add up costs”: we did something similar here: CQO paper. Because emergency department visits and hospitalisations incur different costs, we treated them as separate but correlated events, using multivariate random effects [edit: and with a joint frailty for death, as per Rogers & Pocock].
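This is not the estimation approach from either paper, just a toy simulation (invented rates and frailty distribution) of the mechanism being exploited: one patient-level frailty multiplies the rates of both event types, which is what induces correlation between ED visits and hospitalisations:

```python
import math
import random

def rpois(lam, rng):
    """Knuth's Poisson sampler; fine for the small rates used here."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_shared_frailty(n, rng):
    """Each patient draws one frailty u; both event-type rates are
    multiplied by u, so the two counts become positively correlated."""
    ed, hosp = [], []
    for _ in range(n):
        u = rng.gammavariate(2.0, 0.5)    # frailty: mean 1, variance 0.5
        ed.append(rpois(1.5 * u, rng))    # ED visits, baseline rate 1.5
        hosp.append(rpois(0.6 * u, rng))  # hospitalisations, baseline rate 0.6
    return ed, hosp

rng = random.Random(42)
ed, hosp = simulate_shared_frailty(2000, rng)
# The shared frailty makes the sample correlation between the two
# count vectors clearly positive, despite conditionally independent draws.
```

A joint frailty model runs this logic in reverse: it estimates the frailty variance (and covariate effects) from the observed correlated counts, with death entering through the same shared frailty.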
there’s also the multi-state approach you mentioned, it has been discussed on the message board before: Question on modeling repeated outcomes among single patients
I have similar data, but I am thinking I’ll take the approach described by @f2harrell. I have some reading to do but can maybe update later. Like you, we have a lot of data; the denominator is in fact so large that censoring overwhelms everything, and I may be forced to take a less interesting route or combine events …
This is great, thank you!