Hi everyone, I want to share some thoughts and ask for some of your own.
I have a deep interest in three main dimensions of prediction: information loss (from dichotomizing time and outcomes), counterfactual prediction, and performance validation in the context of decision making.
My main intuition is that there is no good reason not to try to overcome all of the above.
Information Loss: While binary outcomes are extremely popular, I see no reason to avoid time-to-event, competing-risks/multi-state, and longitudinal models, or ordinal/continuous outcomes.
Counterfactual Prediction: While it is fairly easy to create prediction models nowadays, I still find them very difficult to interpret in the “factual” context. I don’t think any clinician really thinks in terms of so-called factual prediction. We do not prioritize patients according to their so-called “absolute risk”; we do so by evaluating the implied predicted risk reduction and/or the implied predicted treatment harm.
Performance Validation in the context of Decision Making: Prediction performance metrics without context are useless and misleading. That’s why I’m a big fan of decision curves: they respect the narrative of the domain expert, and they put a price on errors in terms of real-life consequences.
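To make the decision-curve idea concrete, here is a minimal sketch of the standard net-benefit calculation behind a decision curve, compared against the “treat all” policy. The data are simulated and the function name is my own; this is an illustration, not anyone’s production code.

```python
import numpy as np

def net_benefit(y, p, pt):
    """Net benefit of treating patients with predicted risk >= threshold pt.
    NB = TP/n - FP/n * pt/(1-pt), the usual decision-curve formula."""
    treat = p >= pt
    n = len(y)
    tp = np.sum(treat & (y == 1))
    fp = np.sum(treat & (y == 0))
    return tp / n - fp / n * pt / (1 - pt)

# Simulated, well-calibrated predictions (illustrative only)
rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 1000)
y = (rng.uniform(0, 1, 1000) < p).astype(int)

for pt in (0.1, 0.3, 0.5):
    nb_model = net_benefit(y, p, pt)
    nb_all = np.mean(y) - (1 - np.mean(y)) * pt / (1 - pt)  # "treat all" policy
    print(f"pt={pt:.1f}  model={nb_model:.3f}  treat-all={nb_all:.3f}")
```

Sweeping `pt` over a grid of plausible thresholds and plotting net benefit gives the decision curve; the threshold itself encodes the clinician’s harm/benefit trade-off.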
Having stated all of the above: while I have some solutions for each of the problems mentioned, I’m not familiar with a single solution that addresses all of them.
I would like to produce longitudinal counterfactual predictions and validate them accordingly in terms of decision making.
Counterfactual Prediction for the longitudinal ordinal setting would look like:
0 - Alive
1 - Sick
2 - Dead
U(no-treatment) = 2 * […]
U(treatment) = 3 * […] + 6 * […] + 3 * […]
We can have different utility values for treatment, days of being sick, and days of being healthy accordingly. The expected utility difference is then straightforward, and there is no need for puzzling heuristics such as juggling calibration, discrimination, etc.
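The expected-utility difference described above can be sketched as follows. The per-day utilities and the treatment cost are hypothetical placeholders, and the daily state probabilities stand in for the output of some longitudinal ordinal model; nothing here is a specific model from the discussion.

```python
import numpy as np

# States per day: 0 = alive/healthy, 1 = sick, 2 = dead (absorbing)
# p_* arrays have shape (days, 3): daily state probabilities from a
# hypothetical longitudinal ordinal model (values are illustrative only).

U_HEALTHY = 1.0      # utility per healthy day (hypothetical)
U_SICK = 0.2         # utility per sick day (hypothetical)
U_TREAT_COST = 0.5   # one-off treatment harm/cost (hypothetical)

def expected_utility(p, treated):
    healthy_days = p[:, 0].sum()  # expected number of healthy days
    sick_days = p[:, 1].sum()     # expected number of sick days
    u = U_HEALTHY * healthy_days + U_SICK * sick_days
    return u - U_TREAT_COST if treated else u

days = 30
p_control = np.tile([0.6, 0.3, 0.1], (days, 1))
p_treated = np.tile([0.8, 0.15, 0.05], (days, 1))

benefit = expected_utility(p_treated, True) - expected_utility(p_control, False)
print(f"expected utility gain from treatment: {benefit:.2f}")
```

The point is that once utilities are attached to states and to treatment itself, comparing the treated and untreated trajectories is a single subtraction, with the decision threshold living in the utility values rather than in a post-hoc metric.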
Are you familiar with related models? I wonder what your perspective on the subject is.
It’s worth pursuing. But an example of the difficulty of this approach, as opposed to an ordinal longitudinal model, is the complexity of ordering a late death vs. an early heart attack under the summed-utility approach. Missing data are also problematic.
How would you handle a late death vs. an early heart attack in the context of ordinal longitudinal modeling?
I don’t think that prediction models make any kind of utility interpretation directly, but they do so implicitly through their performance metrics and the inclusion/exclusion criteria of the target population.
The c-index for time-to-event implies that the earliest events should be prioritized.
Lift implies that the patients at highest risk will gain the highest benefit.
The Brier score implies that the absolute differences between predictions and outcomes are equally important for nonevents (p = 0.3, y = 0) and events (p = 0.7, y = 1).
And so on… Sometimes these assumptions are reasonable and sometimes they are not, but I think it would be much easier to interpret and communicate utility in a counterfactual setting. For lift we can use the uplift setting, which is very natural to marketing profiling, but I do believe it is relevant for healthcare as well.
My main takeaway is that we should strive for alignment between the narrative thinking of domain experts / decision makers of all sorts and the underlying assumptions behind performance metrics. I used to think that we should train clinicians to play poker in order to improve their probabilistic thinking, but the following thread changed my point of view: