Mixed-effects modelling

Imagine an RCT, one intervention and one control group, with 5 timepoints, ~270 individuals for whom data are available. I fit a mixed model in R looking like this:

Outcome ~ time + group + time:group + baseline_outcome_value + center + covariate 1 + covariate 2 + … + (1 | SubjectID). So I've got my categorical timepoints, group, the time:group interaction, baseline adjustment for the outcome, a fixed effect controlling for centers (see below), plus my covariates accounting for outcome heterogeneity. It's a random-intercept model; I did not yet include a random slope.
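For concreteness, this model could be fit with lme4 roughly as follows (a sketch only; the data frame `dat` and the variable names are illustrative, not the actual data set):

```r
# Hypothetical lme4 sketch of the model described above.
# 'dat' is assumed to be a long-format data frame: one row per subject per timepoint.
library(lme4)

fit <- lmer(
  outcome ~ time + group + time:group + baseline_outcome +
    center + covariate1 + covariate2 + (1 | subject_id),
  data = dat
)
summary(fit)
```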

Now I’ve got the following questions:

  1. First of all, I'd like to thank Dr. Harrell for pointing out the problems of “change from baseline”, which I had read about in high-impact journals and thus replicated. Luckily, before publication, I changed my approach: I now model the raw values of my outcome and adjust for the baseline outcome value (besides other covariates).

  2. I’m unsure how to model the time:group interaction. First, I’m asking myself whether this interaction is needed at all (i.e. to obtain treatment effectiveness for each time point separately). I also read here that I need to include a random slope for time if I include the time:group interaction, since it is a cross-level interaction (level 2: subjects, level 1: timepoints). As I’ve read publications both including and excluding the time:group interaction, I’d like to hear opinions on that and on the inclusion of a random slope in the model.

  3. I’ve got 7 centers which recruited patients. The centers have highly uneven numbers of patients recruited (ranging from ~5 to 80), and the allocation ratio is not 1:1 within each center (due to chance). I’ve read here that the question arises how to model these effects. In my case, I decided to use a fixed-effects approach to account for the effects of center. However, I’m asking myself 1) whether the fixed-effects approach is better than the random-effects approach in my case, 2) what the fixed-effects approach would mean for the generalizability of the results (the linked paper states that, with fixed effects, there is no generalizability beyond the centers included in the trial, yet my centers are pretty heterogeneous), and 3) whether my model above is the more appropriate choice or whether the model containing the center-by-group interaction is better:

Outcome ~ time + group + time:group + baseline_outcome_value + center + center:group + covariate 1 + covariate 2 + … + (1 | SubjectID).

Given the power issues arising from center:treatment interactions (which lead some trials to set alpha to .10, see link 2), I wonder to what extent this model would make sense, as it now includes two interactions, time:group and center:group. I’ve attended Dr. Harrell’s course, in which he explained how little power there is for estimating even one interaction in most models.

You’ve raised lots of issues. Here are some thoughts related to some of them.

  • A random intercept model is unlikely to satisfactorily fit the correlation pattern. See this. Adding random slopes will make it fit better, but that still induces an unlikely correlation pattern, and the model will have too many parameters to be stable.
  • Model correlation directly using AR(1) or a Markov process.
  • I don’t worry about variation of allocation ratios across centers very much
  • The number of centers is right at the borderline of needing to use random effects for them
  • A treatment × time interaction is usually needed, with the main question being whether it should be linear or not. Prioritizing a one-degree-of-freedom contrast for the treatment effect (usually at the last follow-up time, at a single time point in the model, or some kind of average) is usually done, to focus power.
  • Regarding generalizability, I hope others can answer that. My initial guess is that random effects for centers allow you to make predictions for hypothetical new randomly chosen centers, but the treatment effect will still depend somewhat on the choice of centers (although covariates handle the most important source of heterogeneity: the patient).
  • I would not put treatment by center interactions in the primary analysis but make this a secondary (low power) analysis of treatment effect consistency over centers.
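On the suggestion above to model the correlation directly: one way to do this is generalized least squares with an AR(1) structure, e.g. via nlme::gls (or rms::Gls). A sketch, with the data frame `dat` and all variable names assumed for illustration:

```r
# Sketch: GLS with an AR(1) within-subject correlation structure.
# 'dat' is an assumed long-format data frame; 'visit' is an integer
# time index (1, 2, 3, ...) required by corAR1.
library(nlme)

fit_gls <- gls(
  outcome ~ time + group + time:group + baseline_outcome + center,
  data = dat,
  correlation = corAR1(form = ~ visit | subject_id)
)
summary(fit_gls)
```

Here the serial correlation lives entirely in the residual structure, so no subject-level random effects are needed.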
1 Like

Thanks, that helps a lot. If I get the linked text right, the AR(1) and Markov processes model the correlation in residuals better than a random intercept/slope model because, in the RI/RS models, 1) time points are treated as exchangeable, so there is no forward flow of time, and 2) the correlation between two timepoints does not decrease with greater time gaps. Unfortunately, I’m stuck with the mixed-models approach due to pre-registration. Is there anything I could do to maximize the usefulness of the mixed-models approach besides model selection via BIC for the addition of random slopes (in the linked article you write that the worst mistake is to routinely fit a random-intercept model without adding slopes)?

Another question would be about the reporting of mixed models. Can you recommend any paper that uses a good approach? From what I’ve seen there are a lot of differences in the reporting of such models, especially for the longitudinal case. Would you recommend a table like table 2 in this article?

There is a huge mixed models literature, much of it in the two pharmaceutical statistical journals.

If you have to stick with mixed models, you can at least add an AR(1) correlation pattern inside the mixed model. It’s likely to make the random effects smaller and to add stability to the estimation.
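In R, combining a random intercept with an AR(1) residual structure is straightforward in nlme (lme4 does not support residual correlation structures). A sketch under the same assumed names as before:

```r
# Sketch: random intercept plus AR(1) residual correlation via nlme::lme.
# 'dat' and all variable names are assumed; 'visit' is an integer time index.
library(nlme)

fit_lme <- lme(
  outcome ~ time + group + time:group + baseline_outcome + center,
  random = ~ 1 | subject_id,
  correlation = corAR1(form = ~ visit | subject_id),
  data = dat
)
summary(fit_lme)
```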

1 Like

Thanks a lot! When fitting the models, another question arose: if you use the baseline outcome as a covariate, should only random slopes (but not intercepts) be used? Imagine a case with 5 time points, the first of which is baseline. If I use the 4 following time points (t2–t5) as the outcome and adjust for the baseline outcome (t1) as a covariate, there would be no need for a random intercept, in my opinion. Am I right about this? Or would it give a theoretically better model fit, since the new intercept of my outcome (t2) also varies among individuals?

I have no experience with this, but in simpler settings removing the intercept means that you have already perfectly centered X and Y, which is a pretty dangerous assumption.

I was hoping to use the rms package to fit a model with generalized least squares and penalized MLE; however, the data were collected in a hierarchical structure: think children nested in classrooms nested in schools (and not longitudinal). Do I need to use a mixed model? I was hoping to use PMLE. Can you please point me in the right direction, including an R package if I can’t use rms? Thank you.

The standard R packages will do that, with lots of help on stats.stackexchange.com and stackoverflow.com tag “r”.
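For the nested structure described above, the standard lme4 specification would look something like this (a sketch; the data frame `dat`, outcome, and predictors are assumed for illustration):

```r
# Sketch: children nested in classrooms nested in schools (lme4).
# The '/' notation expands to (1 | school) + (1 | school:classroom),
# i.e. a random intercept at each level of nesting.
library(lme4)

fit_nested <- lmer(
  outcome ~ predictor1 + predictor2 + (1 | school/classroom),
  data = dat
)
summary(fit_nested)
```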

Thanks, so I’d rather stick with the intercept+slope model. Reading this blog post, I’d be interested in your opinion on specifying the random-effects structure versus the residual correlation matrix, as the author suggests that including a random slope of time “induces a within-individual correlation matrix with correlations that decrease in magnitude the further apart the measurements are on the same person—a common feature of longitudinal data”. I ask because in lme4, the package I’m using, there is no (easy?) way of specifying an AR(1) correlation pattern, so if the job is done by a random slope of time I’d be better off.

1 Like

Since the correlation structure induced by a random intercept+slope model is weird and since you are having to model lots of parameters, I would not favor that approach.

1 Like

Thanks a lot, Frank. This is really helpful. One last question in this thread that I stumbled on:

Is there any recommendation you’d have on multilevel modelling with only two timepoints in RCTs (baseline, post measurement), where I am interested in the post-measurement difference between groups? Several experts I spoke with recommended using multilevel modelling only with three or more timepoints, and so does this paper.

Most of my outcomes are measured at three or four timepoints, but a few outcomes are measured at only two timepoints (basically one after baseline). For the latter, I would use an ANCOVA with the post measurement as the outcome, adjusting for baseline (outcome heterogeneity within groups). However, my outcome has quite a lot of missing data. I did not use multiple imputation for my multilevel models, as they use (FI)ML, but what’s the best approach for the outcome variables measured only once after baseline:

  1. use multilevel models without imputation
  2. use ANCOVA with imputation
  3. use ANCOVA without imputation?

Would be very grateful for any recommendations, as I’ve been struggling with this for quite a while.

ANCOVA is the only choice needed for that situation. But there is no way to deal with missing outcomes when outcomes get only one chance to be measured. I would seriously question proceeding with the analysis unless you can guarantee that the missingness resulted from administrative decisions or accidents.
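The ANCOVA for the two-timepoint case is just a linear model on the post measurement, adjusting for baseline. A sketch with assumed names (`dat`, outcome, and covariate are illustrative):

```r
# Sketch: ANCOVA on the single post-baseline measurement.
# Note: lm() silently drops rows with a missing outcome, so this is
# a complete-case analysis by default.
fit_ancova <- lm(
  outcome_post ~ group + outcome_baseline + covariate1,
  data = dat
)
summary(fit_ancova)
```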

Thanks a lot! Just to make sure I understand you correctly: proceed with ANCOVA but use only complete cases, and do not impute data? Or only do ANCOVA if the outcomes are MCAR, and otherwise impute and use the imputed data?

If you can make a strong argument from the study design that the missing values are MCAR, then you can analyze the complete cases. Otherwise the study should probably be abandoned, because it’s not very beneficial to impute outcome data.

1 Like