How to draw a calibration / decision curve for an external dataset?

Dear Professor and everyone who may help,

We built a prediction model by logistic regression on a training dataset and now want to validate its performance on external datasets. We have some questions about the calibration and decision curves for external validation:

1. Drawing the calibration curve for internal validation seems easy using rms, like:

f <- lrm(outcome ~ bio1 + bio2 + bio3, data = data_train, x = TRUE, y = TRUE)
cal_train <- calibrate(f, B = 1000)

If I want to apply this model to the external dataset and draw its calibration curve, should I pass in the predictions for the validation dataset calculated by the fitted model (from the training dataset)? But the calibrate function seems to accept only a model object as its argument, and predictions are not accepted.

2. While searching for solutions, I saw some code for the calibration curve of external validation. After calculating the probabilities for the external validation dataset with the fitted model f (from the training dataset), they fit a new lrm model f_val_ex for outcome ~ prob_external_validation and draw the calibration curve of f_val_ex, rather than drawing the calibration curve directly from the probabilities and the targets. I am confused: why do we need to fit a new model f_val_ex for outcome ~ prob? The calibration curve is then generated from the newly fitted model f_val_ex instead of f from our training dataset. Is this meant to recalibrate poorly fitted probabilities?
The confusing code looks like:

pros_external_validation <- predict(f, newdata = data_external_validation)
f_val_ex <- lrm(outcome ~ pros_external_validation, data = data_external_validation, x = TRUE, y = TRUE)
cal_val_ex <- calibrate(f_val_ex, B = 1000)

In my opinion, if we can only pass a fitted model to the calibrate function, we should build a model that fixes the coefficient of prob at 1 and the intercept at 0; the code should look like:

pros_external_validation <- predict(f, newdata = data_external_validation)
f_val_ex <- lrm(outcome ~ pros_external_validation, data = data_external_validation, x = TRUE, y = TRUE)
cal_val_ex <- calibrate(f_val_ex, B = 1000)
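For what it's worth, the refit in question 2 can be read as a recalibration step: regressing the outcome on the model's linear predictor estimates a calibration intercept and slope (perfect calibration gives 0 and 1). A minimal self-contained sketch in base R with simulated data (all names here are hypothetical):

```r
set.seed(1)
n  <- 2000
lp <- rnorm(n)                               # linear predictor from a hypothetical training model
y  <- rbinom(n, 1, plogis(0.5 + 0.8 * lp))   # outcomes generated with intercept 0.5, slope 0.8

# refitting outcome ~ lp does not build a new prediction model;
# it estimates how miscalibrated lp is in this dataset
recal <- glm(y ~ lp, family = binomial)
coef(recal)  # roughly (0.5, 0.8); perfect calibration would be (0, 1)
```

So the new model f_val_ex is only a device to get a calibration curve (and intercept/slope) for the external data, not a replacement for f.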

Additionally, I also want to draw a decision curve for the external dataset, but the function plot_decision_curve also seems to take a model rather than predicted probabilities. How can I draw a decision curve for validation?

baseline.model <- decision_curve(outcome ~ bio1 + bio2, data = data_training,
                                 thresholds = seq(0, 1, by = 0.01), bootstraps = 1000)
plot_decision_curve(baseline.model, curve.names = "Baseline Model",
                    cost.benefit.axis = TRUE, col = c("red", "blue"),
                    confidence.intervals = TRUE, standardize = TRUE)
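One workaround: net benefit needs only the predicted probabilities and observed outcomes, so the decision curve can be computed by hand from the external predictions. A base-R sketch using the standard net-benefit formula NB(pt) = TP/n - (FP/n) * pt/(1 - pt); the data are simulated and all names are hypothetical. (I believe rmda's decision_curve also has a fitted.risk = TRUE option for precomputed risks, but please check its documentation.)

```r
# net benefit of treating everyone with predicted risk >= pt
net_benefit <- function(prob, y, pt) {
  treat <- prob >= pt
  tp <- sum(treat & y == 1)      # true positives
  fp <- sum(treat & y == 0)      # false positives
  n  <- length(y)
  tp / n - (fp / n) * pt / (1 - pt)
}

set.seed(2)
prob <- runif(200)               # hypothetical external-validation probabilities
y    <- rbinom(200, 1, prob)     # outcomes consistent with those probabilities
thresholds <- seq(0.05, 0.95, by = 0.05)
nb <- sapply(thresholds, net_benefit, prob = prob, y = y)
# plot(thresholds, nb, type = "l")  # the decision curve for the external data
```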

I have been stuck on these questions for days; any kind reply will be greatly appreciated!

This doesn’t do decision curves but does do external validation: the val.prob function in the rms package.
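To illustrate, a self-contained sketch of val.prob on simulated data (the variable names are made up): val.prob takes the predicted probabilities and observed outcomes directly, so no model object needs to be passed.

```r
library(rms)

set.seed(3)
# simulate a training fit and an "external" dataset
train <- data.frame(bio1 = rnorm(300), bio2 = rnorm(300))
train$outcome <- rbinom(300, 1, plogis(train$bio1 + 0.5 * train$bio2))
f <- lrm(outcome ~ bio1 + bio2, data = train, x = TRUE, y = TRUE)

ext <- data.frame(bio1 = rnorm(300), bio2 = rnorm(300))
ext$outcome <- rbinom(300, 1, plogis(ext$bio1 + 0.5 * ext$bio2))

# predicted probabilities in the external data from the training-set model
p <- plogis(predict(f, newdata = ext))

# calibration plot plus statistics (calibration intercept, slope, Brier score,
# c-index); this compares p to the outcomes directly -- no bootstrapping
stats <- val.prob(p, ext$outcome)
```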

Thanks! We had tried val.prob, but we are unsure whether val.prob also uses bootstrapping in the validation, like the calibrate function does?

Resampling (which includes bootstrapping) is not used when doing external validation.