I have a couple of questions about how to use the calibration optimism method in the context of bootstrap bias correction for model validation.
Just to provide some context for my questions:
The calibration optimism calculation is similar to the bias correction method using the bootstrap, but applied to the calibration curve instead; this method is described in [splines - How to estimate a calibration curve with bootstrap (R) - Cross Validated].
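To make the procedure I mean concrete, here is a minimal sketch (my own illustration, not from the linked post, which uses R) of bootstrap optimism correction applied to a single calibration statistic, the calibration slope. All variable names and the simulated data are hypothetical; for each bootstrap sample, the model is refit, its apparent slope on the bootstrap sample is compared with its slope on the original data, and the averaged difference (the optimism) is subtracted from the apparent slope:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical simulated data: 3 informative predictors, 7 noise predictors.
n, p = 200, 10
X = rng.normal(size=(n, p))
beta = np.array([0.5] * 3 + [0.0] * (p - 3))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta))))

def fit(X_d, y_d):
    # Large C approximates unpenalized maximum likelihood.
    return LogisticRegression(C=1e6, max_iter=1000).fit(X_d, y_d)

def calib_slope(model, X_d, y_d):
    # Regress the outcome on the model's linear predictor;
    # a slope < 1 indicates overfitting (predictions too extreme).
    eta = model.decision_function(X_d).reshape(-1, 1)
    return LogisticRegression(C=1e6, max_iter=1000).fit(eta, y_d).coef_[0, 0]

model = fit(X, y)
apparent = calib_slope(model, X, y)  # close to 1 by construction on training data

# Bootstrap optimism: (slope on bootstrap sample) - (slope on original data),
# averaged over B refits of the model on bootstrap resamples.
B = 50
optimism = []
for _ in range(B):
    idx = rng.integers(0, n, n)
    m_b = fit(X[idx], y[idx])
    optimism.append(calib_slope(m_b, X[idx], y[idx]) - calib_slope(m_b, X, y))

corrected = apparent - np.mean(optimism)
print(f"apparent slope:  {apparent:.3f}")
print(f"corrected slope: {corrected:.3f}")
```

The same resampling loop can be applied pointwise to a smoothed calibration curve rather than to the slope alone, which is what I understand the linked answer to do.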
Do we still need calibration bias correction if we are already bias-correcting the performance statistics with bootstrap optimism? If so, which calibration function should we use when applying the model to new data in clinical practice: the optimism-corrected calibration function or the uncorrected (apparent) one?
In theory, should probabilities recalibrated with the optimism-corrected calibration function yield the same performance statistics as the optimism-corrected bootstrap estimates?