RMS Discussions

Thank you and Drew very much for all these answers; I’m looking forward to the discussions about chunk tests and more!

1 Like

Hi @f2harrell,

Regarding my question about imputing missing data for a new data set using a fitted object imp = aregImpute(): you said that it is not possible because the imputation model should include the outcome variable. But when we are doing external validation, we do have the outcome variable. So my question is: does it make sense to impute the missing data in the validation data using the imputation model trained on the derivation data set? Or would you recommend fitting an imputation model to the validation data separately?

Would it be possible to allow the aregImpute() function to perform something like imp.newx = predict(imp, newx)?

Thank you,
Robson

I’d also like to hear more about the advantages/disadvantages of hierarchical (w/ random effects) v. the nonlinear fixed effects models described so far.

I frequently get stuck on the recommendations to 1) multiply impute in the presence of missing data and 2) account for all uncertainty in the variable selection/model-building steps via the bootstrap (or maybe cross-validation), given that there is really no “gold-standard” way to do these two things in tandem.

I came across this paper in a recent lit review and it seems helpful, but I also don’t know enough to critically evaluate it.

I’m aware of several other simulation studies that recommend approaches similar to the article above, such as stacking or grouping, but am curious whether anyone has a “favorite” reference on this topic?

Does anyone have a reference for best practices in using propensity scores? This is new to me, but we do work with high-dimensional data, where adjusting for other variables would be highly desirable.

RE: Multiple Imputation with Bootstrap.

There was a thread on this a few months ago with some good references in the comments.

In a different forum, a practicing statistician indicated to me that bootstrapping first and then imputing was generally best. There are times when it is possible to get comparable results (at less computational cost) by imputing first and then bootstrapping, but care needs to be taken to make sure the variance is not underestimated.

1 Like

Thanks @R_cubed!

Perhaps I am over-complicating it or missing the point completely, but I think the references provided (though helpful!) are largely about bootstrapped 95% confidence intervals for inference, whereas I am interested in using the bootstrap for internal validation to estimate the optimism of model performance.

Intuitively, I am struggling to wrap my head around how bootstrapping first and then imputing would work in a prediction context, especially if one wants to somehow build the uncertainty of model selection into the bootstrap process. I need to give this some more thought.
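To make my question concrete, the kind of loop I have in mind is sketched below. Everything here is hypothetical: a binary outcome y with predictors x1–x3, an lrm fit, the c-index as the performance measure, and only a single imputation per bootstrap sample for brevity.

```r
library(Hmisc)
library(rms)

# Rough sketch of "bootstrap first, then impute" for estimating optimism of
# the c-index.  d is the original (incomplete) data frame; y, x1, x2, x3 are
# hypothetical placeholders.
set.seed(1)
B <- 100

# A singly completed version of the original data, used here as the test data
imp0  <- aregImpute(~ y + x1 + x2 + x3, data = d, n.impute = 5)
fill0 <- impute.transcan(imp0, imputation = 1, data = d,
                         list.out = TRUE, pr = FALSE, check = FALSE)
d0 <- d
d0[names(fill0)] <- fill0

optimism <- numeric(B)
for (b in 1:B) {
  db    <- d[sample(nrow(d), replace = TRUE), ]   # resample the raw data first
  impb  <- aregImpute(~ y + x1 + x2 + x3, data = db, n.impute = 5)
  fillb <- impute.transcan(impb, imputation = 1, data = db,
                           list.out = TRUE, pr = FALSE, check = FALSE)
  dbc <- db
  dbc[names(fillb)] <- fillb                      # completed bootstrap sample
  fb  <- lrm(y ~ x1 + x2 + x3, data = dbc)        # any model selection would go here
  app  <- somers2(predict(fb, newdata = dbc, type = "fitted"), dbc$y)["C"]
  test <- somers2(predict(fb, newdata = d0,  type = "fitted"), d0$y)["C"]
  optimism[b] <- app - test
}
mean(optimism)  # subtract from the apparent c-index of the final model
```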

1 Like

I understood from today’s lecture that one cannot look at the distribution of Y prior to analysis in order to determine an appropriate transformation for the model (e.g. log, sqrt). Can you clarify why? Does this use up “phantom degrees of freedom” if it is not informing my X?
In practice, how would you recommend implementing Gls without knowing the distribution of Y? If in reality Y were heavily skewed, wouldn’t the results be inaccurate? Am I misunderstanding something?
Thanks,

I have a question about items 3 & 4 in Section 4.12.1, Developing Predictive Models.

Assume we collect the data on X and Y at different times, or we collect several waves of X. Do we need to consider potential calendar-time issues in the multiple imputation process?

Actually, I did not understand the spirit of using X and Y to impute X and Y, because we avoid using information from Y in the data reduction of X. Following this spirit, shouldn’t we also avoid using information from Y to impute X?

Thank you so much, and I hope you have a great night!

We have talked about how categorizing/dichotomizing a continuous variable is not a good idea. I’m wondering what would be a good way to handle an age variable when we have de-identified data and don’t know the exact ages of people older than 90 years. Using age as an ordinal variable in this case does not seem like a good idea, as we would lose information. Is there a good way to handle this situation? Would using multiple imputation for the ages we don’t know exactly be a good idea in this case?

Thanks!

1 Like

Actually, my team is thinking about integrating model approximation into calibrate and validate by redoing the model selection for each bootstrap sample, so any information about the problems you or your student ran into (or speculated about) would be valuable! Thanks!
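For reference, validate() and calibrate() already allow fast backward step-down to be repeated inside every bootstrap resample via bw = TRUE, which is the kind of “redo selection per bootstrap” machinery we’d be building on (that facility does variable selection, not model approximation). A rough sketch with a hypothetical lrm fit:

```r
library(rms)

# Hypothetical fit; x = TRUE, y = TRUE are needed so the resampling
# functions can refit the model on each bootstrap sample.
f <- lrm(y ~ rcs(x1, 4) + x2 + x3, data = d, x = TRUE, y = TRUE)

# bw = TRUE repeats fast backward variable selection (here using p-values
# with alpha = 0.05) within each of the 200 bootstrap repetitions, so the
# optimism estimates reflect the selection process itself.
validate(f, B = 200, bw = TRUE, rule = "p", sls = 0.05)
calibrate(f, B = 200, bw = TRUE, rule = "p", sls = 0.05)
```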

Anything’s possible! As things stand now you would just impute the outcome and then impute the X’s, or use one of the other methods here.
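A rough sketch of that, assuming (hypothetically) a frozen lrm fit `fit` from the derivation data, a validation data frame `val` that contains the outcome y, and predictors x1–x3 with missing values:

```r
library(Hmisc)
library(rms)

# Fit the imputation model on the validation data themselves, including the
# outcome, rather than trying to carry imputations over from the derivation
# data's aregImpute object.
set.seed(1)
imp.val <- aregImpute(~ y + x1 + x2 + x3, data = val, n.impute = 10)

# Evaluate the frozen model on each completed validation data set and
# average the performance measure (here the c-index) over imputations.
cstat <- numeric(imp.val$n.impute)
for (i in seq_len(imp.val$n.impute)) {
  filled <- impute.transcan(imp.val, imputation = i, data = val,
                            list.out = TRUE, pr = FALSE, check = FALSE)
  val.i <- val
  val.i[names(filled)] <- filled
  p <- predict(fit, newdata = val.i, type = "fitted")
  cstat[i] <- somers2(p, val.i$y)["C"]
}
mean(cstat)
```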

At first glance the paper looks useful.

I assume that if you are working in high dimensions, your number of subjects must be large. Then you may not need propensity scores and could instead adjust directly for the covariates.

It’s a tough question. You can’t look “too much”. This relates to model uncertainty, e.g., your final confidence bands don’t know about the earlier abandoned transformations. The Faraway paper covers this. It also relates to my quote from the initial part of the course: using the data to guide model specification is almost as dangerous as not doing so.

It is required to use Y to impute X. Regarding calendar time, this would be an ideal variable to include in the imputation model, and is a necessary one in all likelihood.
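(With aregImpute that just means adding the calendar-time variable to the imputation formula alongside the outcome; the variable names below are hypothetical.)

```r
library(Hmisc)

# Hypothetical: calendar.time is the calendar time (or wave) at which each
# record's measurements were collected.
imp <- aregImpute(~ y + calendar.time + x1 + x2 + x3, data = d, n.impute = 10)
```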

A very good question. I’ve taken the easy way out and just left them at age 90. It would be a little better to impute ages above 90, but you would then have to make a parametric distributional assumption because you would be extrapolating.
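A toy version of that extrapolation, purely for illustration: replace the “90+” ages by 90 plus a draw from an assumed exponential tail. The 4-year mean excess and the variable names are assumptions, not something estimable from the de-identified data.

```r
# d$age holds age in years, truncated at 90; d$age90plus flags the truncated
# records.  Both names, and the exponential tail with mean excess 4 years,
# are assumptions made only for illustration.
set.seed(1)
n90 <- sum(d$age90plus)
d$age.imp <- d$age
d$age.imp[d$age90plus] <- 90 + rexp(n90, rate = 1 / 4)
```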

2 Likes

There is a nice discussion of colliders, mediators, and the Table 2 fallacy
here: https://discourse.datamethods.org/t/does-the-table-2-fallacy-apply-in-this-nejm-paper

3 Likes

Hi @f2harrell,
I couldn’t find the reference about propensity scores as a penalised method in your list of papers. You said you have a list of papers for observational studies; can you please point me to the link you mentioned? Thank you!

1 Like

Thanks, @f2harrell, for referring others to this discussion. I appreciate @PerPersvensson’s comments, suggesting that maybe there’s something else going on.

1 Like

In the course, you mentioned that in a repeated-measures study, baseline Y should be a covariate. How would you proceed if there is daily dosing and samples are taken several times within a day on different days, e.g., Day 1 pre-dose, 4 hr, 10 hr, and 24 hr, and Day 20 pre-dose, 4 hr, 10 hr, and 24 hr? Intuitively I would have put pre-dose on the RHS and allowed an interaction between hour (continuous spline) and day (categorical). But if the Day 1 pre-dose is an adjustment variable, how would we treat the Day 20 pre-dose?
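To make that concrete, the specification I had in mind is roughly the sketch below. All names are hypothetical: one row per post-dose sample with columns id, day, hr, y, and predose holding the subject’s Day 1 pre-dose value; how to treat the Day 20 pre-dose sample is exactly the open question.

```r
library(rms)
library(nlme)   # corCAR1

dd <- datadist(d); options(datadist = "dd")

# Elapsed time since the first dose, used only for the within-subject
# correlation structure (hypothetical construction; day is numeric here).
d$time <- (d$day - 1) * 24 + d$hr

# Day 1 pre-dose on the right-hand side; spline in hour interacting with
# day treated as categorical; continuous-time AR(1) correlation within
# subject.  The Day 20 pre-dose rows are the unresolved piece.
f <- Gls(y ~ predose + rcs(hr, 3) * catg(day),
         data = d,
         correlation = corCAR1(form = ~ time | id))
```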

I don’t know of a paper that has laid this out. This is akin to the fact that you can write principal component analysis as a kind of penalized estimation. I wish I had more.

A great question. I don’t have experience with that “new baseline” design. It may be covered in @Stephen’s amazing Statistical Issues in Drug Development book or in Steve Piantadosi’s clinical trials book.

Age is right-censored at 90 years. You could use a model that handles censoring.

If you are interested in the Propensity Score, you might find our paper (by Erika Graf, Angelika Caputo and me) of interest: https://pubmed.ncbi.nlm.nih.gov/18058851/
It is also treated in Section 7.2.13 of my book Statistical Issues in Drug Development and is the subject of this SlideShare: https://www.slideshare.net/StephenSenn1/confounding-politics-frustration-and-knavish-tricks

1 Like

We don’t have as many models for censoring of independent variables.

I was hoping you would chime in Stephen.

1 Like