I was wondering whether someone more experienced in this area could shed some light on a question about model calibration.
I have two models (let's call them model1 and model2) which were developed on two independent datasets (this was not by choice but by design: one of the models came from the literature and its source data are not available).
When I apply both models to the same validation dataset, I get two different calibration curve patterns:
a) the first model shows a small amount of bias when I run a linear regression of the predicted values vs. the observed ones, but the calibration curve itself is a straight line with a regression slope of 0.92;
b) the second model has smaller bias, but its calibration slope is 1.47.
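For reference, this is how I am computing the calibration intercept (bias) and slope, sketched on simulated data (the arrays `y_pred` and `y_obs` are stand-ins for a model's predictions and the observed outcomes in my validation set; the true slope of 0.92 is built into the simulation just for illustration):

```python
import numpy as np

# Hypothetical validation data: y_pred holds a model's predictions,
# y_obs the observed outcomes. Simulated here with a known slope of
# ~0.92 and a small additive bias, mimicking model1's pattern.
rng = np.random.default_rng(0)
y_pred = rng.normal(10.0, 2.0, size=200)
y_obs = 0.5 + 0.92 * y_pred + rng.normal(0.0, 1.0, size=200)

# Calibration regression: observed ~ intercept + slope * predicted.
# Perfect calibration corresponds to intercept = 0 and slope = 1.
slope, intercept = np.polyfit(y_pred, y_obs, deg=1)

print(f"calibration intercept (bias): {intercept:.3f}")
print(f"calibration slope:            {slope:.3f}")
```

A slope below 1 (as for model1) suggests predictions are too extreme, while a slope above 1 (as for model2) suggests they are too compressed; the intercept captures overall bias.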
My hunch is that the former model is the better way forward for future work, assuming one can find a way to recalibrate it. Is this the correct conclusion?