I am trying to predict resistance to treatment with a Lasso-Cox model trained on 1435 patients followed for 10 years, with 181 events and 33 variables (both continuous and categorical, giving approximately 50 covariates in the model once the dummies are counted).
I have a question about bootstrap optimism correction (1000 bootstrap samples) for the calibration of this Lasso-Cox model.
The model has an apparent C-index of 0.66 and an apparent calibration slope of 1.73.
The optimism-corrected estimates are:
C-index_corr = 0.66 - 0.15 = 0.51
Calibration slope_corr = 1.73 - 1.77 = -0.04
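For readers unfamiliar with the procedure, here is a minimal sketch of Harrell's optimism-correction bootstrap. It is purely illustrative and uses my own assumptions: plain OLS on simulated data stands in for the Lasso-Cox fit, and an uncensored concordance index stands in for the survival C-index.

```python
import numpy as np

rng = np.random.default_rng(0)

def c_index(pred, y):
    """Fraction of usable pairs whose predictions are ordered like the
    outcomes (Harrell's C, here without censoring)."""
    n = len(y)
    conc = total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if y[i] == y[j]:
                continue
            total += 1
            if (pred[i] - pred[j]) * (y[i] - y[j]) > 0:
                conc += 1
            elif pred[i] == pred[j]:
                conc += 0.5
    return conc / total

def fit(X, y):
    # OLS with intercept; stand-in for the penalized Cox fit
    Xi = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Small n, mostly-noise features -> visible overfitting
n, p = 100, 20
X = rng.normal(size=(n, p))
y = 0.5 * X[:, 0] + rng.normal(size=n)

beta = fit(X, y)
apparent = c_index(predict(beta, X), y)

B = 200
optimism = 0.0
for _ in range(B):
    idx = rng.integers(0, n, n)                       # bootstrap resample
    b = fit(X[idx], y[idx])
    boot_app = c_index(predict(b, X[idx]), y[idx])    # apparent on bootstrap sample
    boot_test = c_index(predict(b, X), y)             # tested on original sample
    optimism += (boot_app - boot_test) / B

corrected = apparent - optimism
print(f"apparent C = {apparent:.3f}, optimism = {optimism:.3f}, "
      f"corrected C = {corrected:.3f}")
```

The same loop, with the bootstrap model's linear predictor evaluated on the original data, gives the optimism of the calibration slope.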
I understand that the calibration slope shows such a large optimism because the lasso deliberately shrinks coefficients toward zero (a biased estimator), so the apparent slope on the training data exceeds 1. Given that, how would you interpret this result? Is it sensible to assess calibration in this case? Would you re-calibrate the model?
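To make the shrinkage effect concrete, here is a toy demo (my own simplification, using linear regression instead of Cox): shrinking the fitted coefficients by a factor of 0.5 makes the slope of outcome-on-predictions exactly 2, i.e. an apparent calibration slope above 1, analogous to the 1.73 above.

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 500, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

# OLS with intercept
Xi = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
yhat = Xi @ beta      # unshrunken predictions: calibration slope is exactly 1
shrunk = 0.5 * yhat   # mimic lasso-style shrinkage of the linear predictor

def calib_slope(pred, y):
    """Slope from regressing y on pred (with intercept)."""
    pred_c = pred - pred.mean()
    return pred_c @ (y - y.mean()) / (pred_c @ pred_c)

print(calib_slope(yhat, y))    # ≈ 1.0
print(calib_slope(shrunk, y))  # ≈ 2.0
```

The exact factor-of-2 follows because halving the predictions doubles cov(pred, y)/var(pred); in a real Lasso-Cox model the shrinkage is data-driven, so the apparent slope lands somewhere above 1 rather than at a fixed value.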
Furthermore, an even larger optimism correction was observed for a random survival forest on the same data, for both performance measures. This leads to an additional question: is optimism correction valid for machine-learning methods?
Could it be that I am fitting machine-learning methods to a sample that is small relative to the number of features, which produces unstable models (see, for example, van der Ploeg, T., Austin, P.C. & Steyerberg, E.W., BMC Med Res Methodol (2014) 14:137)?