RMS Discussions

I was referring to there being no workaround for a contrast of survival probabilities at a fixed time.

For lrm I did not implement ‘no intercept’ models.


Sorry for coming back to this earlier post.

I read this and want to confirm that you were replying to this question:

Can you (or is it appropriate to) calculate a p-value from this adequacy index to quantify the difference between the models?

You mentioned the way to calculate the p-value:

You can compute p for (1+2) - (1) and (1+2) - (2).

Is it implemented by running anova(fit, test = "LR")?

For example: f <- cph(S ~ age + sex, x=TRUE, y=TRUE)

anova(f, test = "LR")
                Likelihood Ratio Statistics          Response: S 

 Factor     Chi-Square d.f. P     
 age        0.81       1    0.3673
 sex        0.07       1    0.7965
 TOTAL      0.85       2    0.6548

The p-value for the difference between age + sex and age only is 0.7965.

The improvement/adequacy index is 0.07 / 0.85 = 8%.
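
If it helps, here is the explicit nested-model comparison I have in mind, i.e. (1+2) vs. (1) (just a sketch; the fit names are mine):

f1  <- cph(S ~ age, x = TRUE, y = TRUE)        # model (1): age only
f12 <- cph(S ~ age + sex, x = TRUE, y = TRUE)  # model (1+2): age + sex
lrtest(f1, f12)  # likelihood ratio chi-square and p-value for adding sex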

Is my understanding correct? And would you recommend showing this p-value with the adequacy index? Thank you very much.

This kind of inconsistency between the two measures is likely to happen in small samples, just as you can get a large but “insignificant” increase in R^2. If you computed a compatibility interval for the increase in the performance index you’d see that the uncertainty interval is pretty wide.


Thank you.

How do I calculate the compatibility interval of the adequacy index?

You can do the bootstrap ‘manually’ in R, or use relative explained variation instead; the rms rexVar function works with the bootcov function to get bootstrap uncertainty intervals.
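
For the rexVar route, something like this (a rough sketch; the data frame name d, the formula, and B are placeholders):

require(rms)
f <- cph(S ~ age + sex, data = d, x = TRUE, y = TRUE)  # the fit from the example above
b <- bootcov(f, B = 300, coef.reps = TRUE)  # bootstrap replicates of the coefficients
r <- rexVar(b, data = d)   # relative explained variation with bootstrap uncertainty intervals
r
plot(r)                    # plot the estimates and intervals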

Hi there,
I need to extract the Mean |error| and the 0.9 Quantile information that is shown in the calibration plot of cph models.
In my case, it is a Cox model and the plot looks like this:

[calibration plot]

I’ve seen these columns in the matrix:

colnames(calibration_sparse_avas_trans)
[1] "pred"                  "index.orig"            "training"
[4] "test"                  "mean.optimism"         "mean.corrected"
[7] "n"                     "calibrated"            "calibrated.corrected"

but I do not know how to subset or compute the Mean |error| and 0.9 Quantile. Could you please help me to get these values?

Thank you in advance
Marc

I wish I had captured those computed quantities and not just printed them. The best bet is to write a little function that takes the object created by calibrate() and does part of what is in rms:::print.calibrate. You’ll see where the err vector is computed, for example, then where the mean and 0.9 quantile of absolute err are computed. Instead of printing, return those pieces as the new function’s result.
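
Roughly along these lines, lifting the err computation from rms:::print.calibrate and returning the summaries instead of printing them (an untested sketch; the function name is mine):

require(Hmisc)   # approxExtrap lives in Hmisc (loaded with rms)
calibrate_errors <- function(cal) {
  predicted <- attr(cal, "predicted")
  s   <- !is.na(cal[, "pred"] + cal[, "calibrated.corrected"])
  # calibration error at each predicted value, as in print.calibrate
  err <- predicted - approxExtrap(cal[s, "pred"], cal[s, "calibrated.corrected"],
                                  xout = predicted, ties = mean)$y
  c(mean.abs.error = mean(abs(err)),
    q0.9.error     = unname(quantile(err, 0.9, na.rm = TRUE)))  # quantile of err, exactly as in print.calibrate
}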


Thank you, it worked well.

Just as information, I had to change the quantile calculation, using abs(err) instead of err. With the code from rms:::print.calibrate (quoted below), it doesn’t compute the correct quantile value shown in the plot.

else if (length(predicted)) {
  s <- !is.na(x[, "pred"] + x[, "calibrated.corrected"])
  err <- predicted - approxExtrap(x[s, "pred"], x[s, "calibrated.corrected"],
                                  xout = predicted, ties = mean)$y
  cat("\nMean |error|:", format(mean(abs(err))), " 0.9 Quantile of |error|:",
      format(quantile(err, 0.9, na.rm = TRUE)), "\n", sep = "")
}

Modified code:

err <- attr(calibration_sparse_trans, "predicted") -
  approxExtrap(calibration_sparse_trans[sss, "pred"],
               calibration_sparse_trans[sss, "calibrated.corrected"],
               xout = attr(calibration_sparse_trans, "predicted"), ties = mean)$y
cat("\nMean |error|:", format(mean(abs(err))), " 0.9 Quantile of |error|:",
    format(quantile(abs(err), 0.9, na.rm = TRUE)), "\n", sep = "")

Output:
Mean |error|:0.03250078 0.9 Quantile of |error|:0.06672416

Regards,
Marc