Why are competing events mentioned only in the time-to-event framework?

I’m curious if anyone else has thought about this issue:

Competing events are usually discussed within the time-to-event framework, but I don’t recall ever seeing them mentioned as a problem in the binary setting for prognosis prediction models.

Competing events in the binary setting force us to adopt some kind of heuristic. Coding the outcome as 0 = non-event, 1 = event of interest, 2 = competing event, the options are:

  1. Treat the competing event as a non-event (2 → 0)
  2. Treat the competing event as part of the event (2 → 1, i.e., a composite outcome)
  3. Exclude subjects with competing events from the analysis

All of these heuristics are far from ideal, but a data scientist who wants to use an ML algorithm must choose one of them.

From my point of view, the best approach would be a sensitivity analysis: compute the performance metrics under each of the heuristics and check whether the metrics change materially across them.
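Such a sensitivity analysis can be sketched as follows. This is a minimal illustration, not the poster's actual workflow: the three-level outcome and covariates are simulated, and plain logistic regression stands in for whatever ML algorithm one would use in practice.

```python
# Sensitivity analysis over the three heuristics for a three-level outcome
# coded 0 = non-event, 1 = event of interest, 2 = competing event.
# Data and model are illustrative placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
# Simulate mutually exclusive outcomes from a multinomial logit model.
logits = X @ np.array([[0.8, -0.5], [0.2, 0.9], [-0.4, 0.3]])
p = np.exp(np.column_stack([np.zeros(n), logits]))
p /= p.sum(axis=1, keepdims=True)
y3 = np.array([rng.choice(3, p=row) for row in p])

heuristics = {
    "competing -> non-event": lambda y: np.where(y == 2, 0, y),
    "competing -> event":     lambda y: np.where(y == 2, 1, y),
    "exclude competing":      None,  # handled below: drop rows with y == 2
}

X_tr, X_te, y_tr, y_te = train_test_split(X, y3, random_state=0)
aucs = {}
for name, recode in heuristics.items():
    if recode is None:
        keep_tr, keep_te = y_tr != 2, y_te != 2
        Xa, ya = X_tr[keep_tr], y_tr[keep_tr]
        Xb, yb = X_te[keep_te], y_te[keep_te]
    else:
        Xa, ya, Xb, yb = X_tr, recode(y_tr), X_te, recode(y_te)
    model = LogisticRegression().fit(Xa, ya)
    aucs[name] = roc_auc_score(yb, model.predict_proba(Xb)[:, 1])
    print(f"{name}: AUC = {aucs[name]:.3f}")
```

If the three AUCs (or calibration metrics, which matter more for prognosis) diverge noticeably, the choice of heuristic is doing real work and should not be swept under the rug.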

My intuition is that competing events in the binary setting are much more of a problem than people would like to believe.


I agree; it’s just that people have devoted more time to this in the continuous-time setting. The problem is much easier when timing is not of interest: one can use polytomous logistic regression or something similar to model the probabilities of all the mutually exclusive outcomes, then use the estimated parameters to compute the probability of any event, or of any combination of events, of interest. When time is involved, this way of thinking generalizes to multistate transition models, which have the same advantage of estimating the probabilities of interest from observables, without the indirectness of competing risk analysis.
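The polytomous approach described above can be sketched in a few lines. This is a hedged illustration on simulated data, using scikit-learn's `LogisticRegression`, which fits a multinomial model by default for a multiclass target with the lbfgs solver.

```python
# Polytomous (multinomial) logistic regression for a three-level outcome:
# 0 = event-free, 1 = event of interest, 2 = competing event.
# Data are simulated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1500
X = rng.normal(size=(n, 2))
# Simulate mutually exclusive outcomes.
logits = np.column_stack([np.zeros(n), 0.7 * X[:, 0], 0.5 * X[:, 1]])
p = np.exp(logits)
p /= p.sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=row) for row in p])

# One model estimates P(y = k | x) for all three mutually exclusive states.
model = LogisticRegression(max_iter=1000).fit(X, y)
probs = model.predict_proba(X)  # columns: P(free), P(event), P(competing)

# Probabilities of derived quantities follow directly from the fitted model:
p_event_of_interest = probs[:, 1]
p_any_event = probs[:, 1] + probs[:, 2]  # event of interest OR competing event
```

Because the three predicted probabilities sum to one for each subject, no recoding heuristic is needed: any event or combination of events of interest is just a sum of the relevant columns.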

Just as multistate transition models have many advantages over competing risk analysis (as shown elegantly by Terry Therneau in one of the vignettes for his R survival package), polytomous/multinomial or ordinal regression has advantages in the simple case.
