RMS Multivariable Modeling Strategies

That’s a reasonable interpretation, though I think you might sometimes reach the opposite conclusion. Better would be to use 4.1.2 after doing chunk tests of groups of collinear predictors. (But what to do with the results? Perhaps use sparse PCA instead of all this?)
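To make “chunk test” concrete, here is a minimal sketch with simulated data (all names and numbers invented): anova() in rms can test any group of predictors jointly, giving one P-value for the group rather than relying on unstable per-coefficient tests.

library(rms)
set.seed(2)
n  <- 200
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.3)   # deliberately collinear with x1
x3 <- rnorm(n)
y  <- x1 + 0.5 * x3 + rnorm(n)

f <- ols(y ~ x1 + x2 + x3)

# One chunk test for the collinear pair; the individual coefficients
# of x1 and x2 are unstable, but their combined test is well behaved
anova(f, x1, x2)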


Thank you for the reply Professor, much appreciated!

Hi Prof Harrell,

I want to confirm my understanding of RMS 4.12.1, Developing Predictive Models.

In this section, you say that we should use a single P-value to test an entire predictor, all nonlinear terms combined, or all interaction terms combined.

For example, if a single predictor has a P-value > 0.05, we should NOT delete it; we need to check the global P-value. For an interaction term, we should look at the global interaction P-value: if the global P < 0.05, we should NOT delete any individual interaction term, even one whose own P-value is > 0.05.
(Prerequisite: the variables and interactions are pre-specified, i.e., an interaction is included only when there is a plausible physiological mechanism for it.)

I have marked my understanding in bold at the end of your notes. In summary, we should not exclude any interaction or predictor just because its P > 0.05; we should focus on the global P-value.

Can do highly structured testing to simplify the “initial” model:

1. Test an entire group of predictors with a single P-value **(⇒ the TOTAL P value in the ANOVA table)**
2. Make each continuous predictor have the same number of knots, and select the number that optimizes AIC
3. Test the combined effects of all nonlinear terms with a single P-value **(⇒ the TOTAL NONLINEAR P value in the ANOVA table)**
4. Check additivity assumptions by testing pre-specified interaction terms. Use a global test and either keep all or delete all interactions **(⇒ the TOTAL INTERACTION P value in the ANOVA table)**
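For concreteness, a minimal simulated example (all variables and effect sizes invented) of where these pooled tests appear in the anova() output of an rms fit:

library(rms)
set.seed(1)
n   <- 300
age <- rnorm(n, 50, 10)
bp  <- rnorm(n, 120, 15)
sex <- factor(sample(c("f", "m"), n, replace = TRUE))
y   <- rbinom(n, 1, plogis(-6 + 0.08 * age + 0.5 * (sex == "m")))

# Pre-specified splines and one pre-specified interaction
f <- lrm(y ~ rcs(age, 4) * sex + rcs(bp, 4))

# Each predictor gets a combined test (its nonlinear and interaction
# terms pooled in), and the bottom rows of the table give the global
# TOTAL NONLINEAR and TOTAL INTERACTION tests
anova(f)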

Is my understanding correct? Thank you very much.

Your understanding is correct up to a point. There is a serious question of whether you should ever remove part of the model just because it’s “non-significant”, which actually means very little in any context. What is particularly bad, though, is removing part of a variable that spans multiple parameters, i.e., removing a single “insignificant” parameter that is connected to other parameters in the model.

With regard to interactions, since they are so hard to estimate, we need to move to a Bayesian approach where priors are carefully specified so that interaction terms are “half in and half out” of the model.
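As a rough sketch of that idea (and not necessarily the approach linked below), one can fit the model in a general Bayesian package such as brms with a skeptical, tightly concentrated prior on the interaction coefficient, so it is shrunk toward zero without being dropped entirely. The data, names, and prior scales here are all invented for illustration:

library(brms)
set.seed(3)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- rbinom(200, 1, plogis(0.8 * d$x1 + 0.4 * d$x2))

pri <- c(
  prior(normal(0, 2.5),  class = "b"),                 # main effects
  prior(normal(0, 0.25), class = "b", coef = "x1:x2")  # skeptical prior: interaction half in, half out
)

fit <- brm(y ~ x1 * x2, data = d, family = bernoulli(), prior = pri)
summary(fit)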

Thank you very much :grinning:. Do you have an example of the Bayesian approach you mentioned for dealing with interactions?

See this.


Prof. Harrell,
Is there an option to fit a negative binomial model in rms? I could not find the option in ‘family’. The Stack Overflow forum suggests using:

library(rms)
library(MASS)   # provides negative.binomial()

# Example data from ?glm (Dobson, 1990)
counts    <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome   <- gl(3, 1, 9)
treatment <- gl(3, 3)

# negative.binomial() builds a glm family with theta fixed in advance,
# which Glm accepts like any other family
Glm(counts ~ outcome + treatment, family = negative.binomial(theta = 1))

I think the answers on stats.stackexchange.com are pretty complete. I would have to implement something special for θ for rms to give you a general solution. Code contributions welcomed. In the meantime, use glm or one of the other functions along with the effects or marginaleffects packages.
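For instance, a minimal sketch along those lines, reusing the counts/outcome/treatment example data above: glm.nb() from MASS estimates θ from the data rather than fixing it, and marginaleffects can then summarize the fit on the count scale.

library(MASS)             # glm.nb() estimates theta rather than fixing it
library(marginaleffects)

fit <- glm.nb(counts ~ outcome + treatment)
summary(fit)              # reports the estimated theta alongside the coefficients

# Average contrasts on the response (count) scale
avg_comparisons(fit)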