Interpreting the random-effect solution in a mixed model

I am using a generalized linear mixed model to account for repeated measurement (of counts of things representing performance in multiple matches of different sports teams, but that is not particularly relevant). The random-effect solution for the identity of the subjects (the teams) represents relative magnitudes of the subject means of the dependent variable. The covariance parameter for subject identity is provided as a variance, and I have always interpreted the square root of that covparm as the between-subject standard deviation, after adjustment for everything else in the model. Residual error has been partitioned out of it, so I call it the true between-subject SD, as opposed to the observed between-subject SD, which includes residual error.
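In case it helps to make that distinction concrete, here is a minimal sketch in Proc Mixed (a simple Gaussian simulation with made-up names, not the count model above): the square root of the subject covparm recovers the SD used to generate the subject effects, whereas the SD of the raw subject means is the observed between-subject SD, inflated by residual error (roughly sqrt(2**2 + 5**2/8) here).

data sim;
  call streaminit(1234);
  sdTrue = 2;     /* true between-subject SD used to generate the data */
  sdResid = 5;    /* residual (within-subject) SD                      */
  do SubjectID = 1 to 20;
    subjEffect = rand('normal', 0, sdTrue);
    do trial = 1 to 8;
      y = 50 + subjEffect + rand('normal', 0, sdResid);
      output;
    end;
  end;
run;

proc mixed data=sim covtest;
  class SubjectID;
  model y = / solution;
  random intercept / subject=SubjectID;
  ods output CovParms=cp;    /* sqrt of the Intercept covparm ~ true between-subject SD */
run;

proc means data=sim noprint nway;    /* raw subject means */
  class SubjectID;
  var y;
  output out=subjmeans mean=subjMean;
run;

proc means data=subjmeans std;       /* observed between-subject SD, inflated by residual error */
  var subjMean;
run;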

Fine (or at least I hope so), but here's my problem. The SD of the random-effect solution should give another estimate of the pure between-subject SD. I realize the two won't match exactly, because of a degrees-of-freedom issue: if I have n subjects, the SD of the random-effect solution is calculated as if there are n-1 degrees of freedom, but the SD provided by the variance for the subjects is calculated with, or at least should be consistent with, the actual degrees of freedom for the variance. An estimate of those degrees of freedom is given by 2*Z^2, where Z is the variance divided by its standard error. (I am using Proc GLIMMIX in SAS, by the way, which provides a standard error for the covariance parameters.) Well, I've done the calculation to correct the SD of the random-effect solution using the degrees of freedom, and the correspondence is not exact, but it's near enough, so let's assume that the mismatch between the SDs is just a degrees-of-freedom issue.

With these data the degrees of freedom of the subject variance are small (I am getting values of around 1, or sometimes even less), so the SD of the random-effect solution is a lot less, by a factor of ~10, than the SD given by the square root of the covparm variance. So according to the random-effect solution there are small differences between teams, but according to the square root of the covparm variance, the differences between teams are 10x greater. I actually want to use the random-effect solution to assess individual teams, but I am reluctant to, because things don't add up. Should I do some kind of correction on the values of the random-effect solution?
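For what it's worth, here is how I am doing the degrees-of-freedom calculation, sketched in Proc Mixed with placeholder dataset and variable names (Glimmix is analogous; covtest is what gives the standard error of the covparm in Mixed):

proc mixed data=mydata covtest;
  class SubjectID;
  model y = xVarExpt / solution;
  random intercept / subject=SubjectID solution;
  ods output CovParms=cp SolutionR=solr;
run;

data df;
  set cp;
  if CovParm = 'Intercept';        /* the subject variance                             */
  Z  = Estimate / StdErr;          /* Wald Z for the variance                          */
  df = 2*Z**2;                     /* approximate degrees of freedom                   */
  trueBetweenSD = sqrt(Estimate);  /* compare with the SD of the solr Estimate values  */
run;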

Immediately after I posted the above, I realized I should have added a related question. SAS allows negative variance, and it provides a random-effect solution even when the variance is negative. I checked that the solution is included in the linear model to give predicted values, the same as when the variance is positive. But now how do I interpret the random-effect solution? It's as if there were negative real differences between the subjects.

Will


A fellow SAS user; we are a bit rare on here. Not sure what you mean when you say SAS allows negative variance. In the 'bounds' statement in NLMIXED I would have var>0. If there's some issue with convergence, they sometimes suggest a different parameterisation; maybe this is what you've seen? Maybe this helps clarify some things: The median hazard ratio: a useful measure of variance and general contextual effects in multilevel survival analysis with discussion of the GCE?

My question about negative variance is not a convergence or parameterization issue. In Proc Mixed and Proc Glimmix you can specify "nobound" to allow negative variance. (Proc Nlmixed doesn't allow it, by the look of the documentation.) Nobound increases the risk of failure to converge, but I can usually get around that by stating initial values of the covparms and/or by relaxing the convergence criteria, then relaxing them even further to check that there is no substantial change in the estimates. Negative variance is pretty much essential when you have a random effect representing individual responses, because it's the only way to get sensible compatibility (confidence) intervals. And it's pretty obvious that sampling variation can result in negative variance when the sample size and/or the true variance is small. I've used simulation to check that the intervals include the true values at the chosen level of the interval.
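To make that workflow concrete, here is a rough sketch of what I mean, with invented dataset and variable names (Proc Mixed shown; Glimmix is similar): nobound on the proc statement, parms for starting values, and a relaxed convergence criterion via convh, which I then relax further to check that the estimates do not change substantially.

proc mixed data=mydata nobound covtest convh=1e-6;
  class SubjectID;
  model deltaY = xVarExpt / solution;
  random xVarExpt / subject=SubjectID solution cl;
  parms (0.5) (1.0);     /* starting values: random-effect variance, then residual */
  ods output CovParms=cp SolutionR=solr;
run;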

But it's spurious and definitely not "essential". If the variance is known to be close to 0, then lose the random effect.

It's not really a good idea to remove a random effect from a model if it's close to zero. You should be looking at the compatibility (confidence) limits of the variance, to see how big or small it could be, in the same way that you interpret the limits for fixed effects. In any case, what is "close"? And as I said, you need to allow negative variance to get an unbiased estimate with accurate limits. That is, the 90% or whatever interval has to cover the true value 90% or whatever of the time. It does, if you allow negative variance. It doesn't, if you don't, and the estimate is biased high. I've shown this repeatedly with simulations.


I posted this problem also on the SAS Analytics community. So far one answer and my reply, which includes a simulation to compare mixed models with positive and negative variance. Here's the link.


What do you assume in the simulations regarding the team effects? There must be a discrepancy between this and your model.

The simulation uses change scores for the analysis, so there is no random effect for (the intercept of) subject identity. There is a random effect for the interaction of the dummy variable with subject identity (random xVarExpt/subject=SubjectID, which can also be specified as random xVarExpt*SubjectID). The analysis can also be done with original scores rather than change scores, in which case there would need to be a random intercept for subject identity, but I wanted the simplest possible random-effect structure to clarify the problem. Also, it's off the topic, but modeling of change scores in a controlled trial allows easy inclusion of a covariate for baseline scores, to estimate the modifying effect of baseline scores. If you use original scores, the modifying effect of baseline has to be estimated via a more complicated unstructured covariance matrix.
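A rough sketch of that structure, with invented dataset and variable names, and a baseline covariate plus its interaction with the dummy variable included just to illustrate the modifying-effect point:

proc mixed data=changes covtest nobound;
  class SubjectID;
  model deltaY = xVarExpt Baseline xVarExpt*Baseline / solution;
  random xVarExpt / subject=SubjectID solution;    /* equivalent to random xVarExpt*SubjectID */
  ods output CovParms=cp SolutionR=solr;
run;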

My understanding is that the need for random effects, if one is not modeling serial correlation directly, comes from the number of follow-up measurements, which is independent of whether you subtract the baseline.

Thanks, Frank. Not quite sure what you are referring to. Can you clarify? Also, do you have any suggestion for my original problem, which is how to interpret the random-effect solution? Here's an edited version of how I summarized the dilemma on the SAS Analytics/Stats Procs community:

I am still undecided about whether I should interpret the values of the random-effect solution as is, or whether I should inflate the values so that their simple SD squared is the same as the covparm variance. If I interpret the magnitude of the SD given by the square root of the covparm variance, I get a value whose expected value is correct: the SD I used to generate the individual responses in the simulation, and it's substantial, if I make it large enough. But if that SD has only a few degrees of freedom, the SD of the random-effect solution will be much smaller, and it could be trivial. So from the random-effect solution I get the impression that the individual values of the individual responses are trivial, yet their SD via the covparm is substantial. It seems I have no choice but to inflate the values of the random-effect solution so that their SD is the same as that given by the covparm.

I was referring only to the cluster sizes (# observations per subject). The only way I can understand the relevance of the change score comment is if the alternative were to be inclusion of the baseline measurement as a first follow-up measurement, which brings a number of problems.

Thanks for clarifying. Yes, if you are modeling actual scores and not change scores, you can't include values of one of the trials as a covariate. If you want to keep the baseline as a covariate and you have a few trials, it's easy to model changes from baseline, include baseline as a covariate, and allow for correlations between the change scores with appropriate dummy variables and an unstructured covariance matrix. If you have many trials, and the data represent monitoring along with any embedded intervention, you have to model original scores rather than change scores. An issue then is the potential for a modifying effect of the subject mean values (the random effect for subject) rather than any particular baseline or other trial; for example, there could be individual differences in a linear trend across the trials, so you would have (in the language of SAS) random int Time/subject=SubjectID type=un;. The structure of the residuals or inclusion of other within-subject random effects would need some thought.
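In Proc Mixed syntax, that structure might be sketched like this (dataset and variable names assumed; Time is continuous, so it stays out of the class statement):

proc mixed data=monitoring covtest;
  class SubjectID;
  model y = Time / solution;
  random int Time / subject=SubjectID type=un solution;
  /* a repeated statement could be added to model the residual structure */
  ods output CovParms=cp SolutionR=solr;
run;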

It looks like I am not going to get any further help on my problem of whether to rescale the random-effect solution to increase the SD of the solution to make it the same as the square root of the corresponding covparm, so I've done some more simulations to try to figure it out. I found that the compatibility intervals for the individual responses provided by the random-effect solution appear to be accurate; that is, with 90% intervals, on average the intervals contain the simulated true value of the individual response 90% of the time, even though the SD of the random-effect solution is way less than the square root of the covparm (which happens when there are only a few degrees of freedom for the covparm). So although the individual estimated values are shrunk, the compatibility intervals capture the true values correctly. I therefore think it's safe to rescale the random-effect solution and the compatibility limits. It amounts to a correction for shrinkage.

I'm not clear on that. When one of the values is a pre-treatment baseline measurement, it is standard to include it as a covariate and to make the first post-time-zero measurement be the first of the multivariate responses.

Oops, sorry, yes, I forgot that approach to adjusting for baseline with repeated measurement.

In case anyone is still interested in this topic, I have resolved the problem of the variance of the random-effect solution (as given by ods output solutionr= in Proc Mixed) being somewhat (and sometimes a lot) less than the variance given by its corresponding covparm (as given by ods output covparms=). I have posted a full explanation, backed up by another simulation, on the SAS Statistical Procedures community notice board. See this link. In summary, the individual solution values for a random effect all have standard errors, and it turns out that the variance of the solution values plus the mean of their squared standard errors equals the variance given by the covparm. So, when I use a random-effect solution as a linear predictor in another model, its effect will be attenuated by those standard errors, and the factor to correct the attenuation is simply (covparm variance)/(variance of the solution).
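To illustrate with a sketch (the names here are my assumptions, not the code from the linked post), starting from SolutionR=solr and CovParms=cp output like the runs sketched earlier in the thread:

data solr2;
  set solr;
  se2 = StdErrPred**2;      /* squared prediction SE of each solution value */
run;

proc means data=solr2 noprint;
  var Estimate se2;
  output out=sums var(Estimate)=varSol mean(se2)=meanSE2;
run;

data cpsub;
  set cp;
  if CovParm = 'xVarExpt';    /* covparm for the random effect; adjust to match your output */
  covparmVar = Estimate;
  keep covparmVar;
run;

data check;
  merge cpsub sums;
  reconstructed = varSol + meanSE2;      /* should be close to covparmVar          */
  attenFactor   = covparmVar / varSol;   /* factor to correct for the attenuation  */
run;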

Will


What do you assume in the simulations regarding the team effects?

Sorry for the delay in replying... other urgent stuff before Xmas. Not quite sure what you are asking. The differences between teams and the changes within teams were normally distributed. In these simulations there are no relationships with other variables, but I have done further simulations to check what happens when I have another variable that tracks the change scores. The simulations are for real data, where the sport scientists at Olympiatoppen are monitoring changes in biomechanical measures in one kind of test (on a dynamometer) in their athletes, and they want to know the extent to which the changes in those measures track changes in the more usual fitness or performance tests (jump height, sprint speed and so on). As I noted previously, the random-effect solutions provide measures of change scores, and their SDs were smaller (sometimes by quite a lot) than the SDs coming from the covparms. But surprisingly, when I used the random-effect solutions as predictors in a model with the performance-test measure as the dependent variable, I got unbiased estimates of the relationships that I had set up in the data between the biomech and performance measures. So I was wrong about needing to correct for attenuation.

But it's all very complicated. I am going to do some simpler analyses for the guys at Olympiatoppen, where we just see what kind of relationships we get between consecutive change scores of the biomech and performance measures, because most of the time they're just interested in changes since the last test.