It is said that probability predictions are their own error metrics. But what about the regions of the predicted probability scale that are imperfectly calibrated? How do we incorporate what we know about calibration imperfections into the probability predictions that we enter into our utility functions for decision making?
I’m not sure this is a difficult problem. If you estimate a probability of disease of 0.2 when the true probability is 0.3, and you act as if the patient has the disease, you’ll be wrong 0.7 of the time, since 70% of such patients are disease-free. The frequency of incorrect actions is governed by the true probability, not by your estimate.
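A minimal simulation sketch of that point, using the numbers from the example above (the 0.3 true probability and the "act as if diseased" decision come from the post; numpy and the seed are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility
n = 100_000

# True disease status: each patient has the disease with probability 0.3
diseased = rng.random(n) < 0.3

# Our miscalibrated estimate is 0.2, but suppose we nonetheless
# act as if every such patient has the disease.
# We are then "wrong" exactly when the patient is disease-free.
wrong = ~diseased

print(wrong.mean())  # ~0.7: the error frequency reflects the true
                     # probability 0.3, not the estimate 0.2
```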
Thank you for confirming my intuition!