Quality of life surveys and proportional odds models

ordinal
functional-status
#1

Hello all,

I was asked by a colleague to examine a data set that looked at pre-operative frailty as a predictor of post-operative quality of life, but I am struggling to identify a suitable model due to the nature of the input variables.

The outcome is the RAND 36-item health survey questionnaire, which collapses down to eight categories. For example, one category, “physical functioning”, is the mean of 10 individual questions scored on a three-point scale, while “pain” is the mean of two questions scored on a five-point scale. All said and done, you are left with a score for each category that can take defined values between 0 and 100.

Predictors of these values are frailty (measured as the proportion of 11 defined co-morbidities that a patient presents with), sex and age. Patients are classified as frail when they have three or more conditions (frailty index ≥ 3/11, i.e. ≈ 0.27 or more).

The publications I’ve seen using similar data appear not to address the underlying data or justify the model selected. My own path has arrived at using a proportional odds ordinal logistic model (rms::orm) where I treat the outcome as continuous and frailty as ordinal.

fit <- orm(physical_functioning ~ scored(frailty) + rcs(age, 4) + sex, data = data)
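For completeness, the fuller sketch I have in mind is roughly the following (column names tidied up so they are syntactically valid; my real data frame differs):

    library(rms)
    # assumes a data frame called data with columns physical_functioning (0-100 score),
    # frailty (count of co-morbidities out of 11), age and sex
    dd <- datadist(data); options(datadist = "dd")
    fit <- orm(physical_functioning ~ scored(frailty) + rcs(age, 4) + sex, data = data)
    fit            # proportional odds fit with discrimination indexes
    anova(fit)     # Wald tests, including the nonlinear (spline) terms for age
    summary(fit)   # inter-quartile-range odds ratios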

My question is, am I approaching this correctly? Any thoughts, suggestions or direction when facing this type of data would be greatly welcomed.

3 Likes

#2

I think that the proportional odds model is a really good choice here. But IMHO you are not sufficiently questioning two of the premises of the research. I am convinced that neither premise is correct.

  • Some of the components of the frailty index may have been dichotomized versions of more fundamental measures; the original ordinal or continuous values of these may well contain more information about frailty than the entire frailty index
  • The definition of frailty = frailty index \geq 3 was pulled out of thin air and is not justified.

You seem not to be relying on those premises, thank goodness, but I suggest you don’t let them go by without criticism.

3 Likes

#3

Good questions by the OP. I’d like some input on this as well, since this problem of information loss also affects formal approaches to research synthesis.

On the frailty index: I assume only the dichotomized scores are available. Is there some way of incorporating what little information remains in the reported data in a principled way?

Also, related to the question by the OP – how do you feel about constructing these scores by averaging inherently ordinal variables? What metric (aside from averages of ordinal items) could be proposed?

@f2harrell: do you have any ideas on how this information loss would relate to the evidence metrics described by Jeff Blume? I haven’t worked out all of the math yet, but it seems like it would amount to a discounted sample-size adjustment based on ARE (asymptotic relative efficiency).

AFAICT, the problem of “dichotomania” would affect power. I was considering some sort of metric that takes the ratio \frac{1 - \beta}{\alpha}, looks for errors that would increase \beta or \alpha, and then discounts accordingly. The closer a study’s or test’s \frac{1 - \beta}{\alpha} gets to 1, the less valuable it is as an evidence measure.

My inspiration is the following: E. Lehmann, Some Principles of the Theory of Testing Hypotheses.

I understand from BBR that dichotomization is terrible for power. If I am doing the math right, a study originally powered at 80% that then dichotomizes can throw away as much as 80% of its pre-study power, leaving only 16% actual power: 0.8 × (1 − 0.8) = 0.16.

0 Likes

#4

Besides just being terrible for power, here is my even bigger peeve about dichotomizing continuous outcomes:

I am currently working on a trial design where the primary outcome variable will be left ventricular ejection fraction (LVEF) measured 6 months after initiation of treatment. I have successfully lobbied the investigator to use LVEF as a continuous variable, thank goodness (ANCOVA with the 6-month LVEF as the endpoint and covariate adjustment for the baseline value), although many of his collaborators were pushing for something like a dichotomized “LVEF recovery” (defined as a >=10% increase from the baseline LVEF).

If we do this, a patient who had a 5% increase is treated the same as a patient with zero improvement, while a patient who had a 10% increase is treated the same as a patient who had a 20% increase. Worse, if one treatment improved every single patient by exactly 5% while an alternative treatment improved half of patients by 10% and WORSENED half of patients by 10% (so the mean improvement is zero), the dichotomized version of the outcome (“improvement” = 10% increase) would suggest that the latter treatment is superior, even though the ‘average’ benefit of the former is better (5% improvement versus zero).
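To put toy numbers on that last scenario (hypothetical values, obviously not trial data):

    # Toy numbers: treatment A improves every patient's LVEF by exactly 5 points;
    # treatment B improves half by 10 and worsens half by 10.
    a <- rep(5, 100)
    b <- rep(c(10, -10), 50)
    mean(a)        # 5  -- a real average benefit
    mean(b)        # 0  -- no average benefit
    mean(a >= 10)  # 0.0 -- no "responders" under the >=10 cut-off
    mean(b >= 10)  # 0.5 -- half are "responders"
    # The dichotomized endpoint crowns B the winner even though its mean effect is zero.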

Not only do you lose power, you can actually get completely nonsense results like this.

6 Likes

#5

The formula isn’t so easy but that’s in the right direction. The efficiency loss is first thought of as variance ratios, which become sample size ratios in effect. And I don’t think of dichotomization as hurting just one style of analysis such as second-generation p-values; it affects all methods.
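A rough numerical illustration of the variance-ratio idea, for the simplest case of median-splitting a normally distributed variable:

    # Correlation with the outcome before and after a median split; the squared
    # attenuation is roughly the efficiency (variance) ratio.
    set.seed(1)
    x <- rnorm(1e5)
    y <- 0.5 * x + rnorm(1e5)
    cor(x, y)                          # about 0.45
    cor(as.numeric(x > median(x)), y)  # attenuated by roughly sqrt(2/pi) ~ 0.80
    # efficiency ratio ~ 0.8^2 = 0.64, so you need roughly 1/0.64 ~ 1.6 times the
    # sample size to recover the same precision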

1 Like

#6

Thanks for the comments. It’s reassuring to know I’m on the right track with my analysis choices (the research question itself is another matter).

@f2harrell: I only use frail/not frail for initial visualisation, to get a sense of the data distributions, and I intend to use the nice example offered by @ADAlthousePhD when I next meet my colleague.

As indicated in my original post, the literature appears to be awash with dubious reports of pre-operative measures predicting post-operative QoL. Early on in this project, I’m pretty sure I saw a paper where the authors reversed the outcome and the predictor in the model. I don’t have a lot of miles behind me with respect to medical studies, but the prevalence of dichotomising data in just the projects that have crossed my desk is alarming.

0 Likes

#7

The initial look that assumes the patient falls off a cliff at the threshold for ‘frail’ will be misleading.

0 Likes

#8

I tried looking up your reference for this; sadly it appears that the 2009 paper is not available online.

I was able to find someone who addressed this previously:

He describes the problems both Andrew and you describe, but it seems the impact depends on the actual effect size. I will have to study this more closely, maybe do some R simulations.
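Something along the lines of this quick sketch is what I had in mind (a normal outcome, dichotomized at the pooled median, with n and effect size chosen so the continuous analysis has roughly 80% power – purely illustrative):

    # Power to detect a 0.5 SD mean difference with n = 64 per arm (~80% power
    # for a t-test), comparing analysis of the raw outcome with a median split.
    set.seed(2)
    one.trial <- function(n = 64, delta = 0.5) {
      y0 <- rnorm(n); y1 <- rnorm(n, delta)
      p.t    <- t.test(y1, y0)$p.value
      cut    <- median(c(y0, y1))
      p.dich <- prop.test(c(sum(y1 > cut), sum(y0 > cut)), c(n, n))$p.value
      c(cont = p.t < 0.05, dich = p.dich < 0.05)
    }
    rowMeans(replicate(5000, one.trial()))
    # the continuous analysis sits near 0.80; the dichotomized analysis falls well below it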

In Jeff Blume’s framework, the evidential value of a paper (that rejects) is \frac{1 - \beta_{\alpha}}{\alpha}; both \beta and \alpha get increased by this practice, driving the likelihood ratio towards 1, possibly making the data summary worthless.
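To put some back-of-the-envelope numbers on that (my own illustrative values, not Blume’s):

    alpha <- 0.05
    (1 - 0.20) / alpha   # planned: power 0.80 -> ratio 16
    (1 - 0.50) / alpha   # power degraded to 0.50 -> ratio 10
    (1 - 0.90) / 0.10    # power 0.10 with alpha inflated to 0.10 -> ratio 1, worthless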

Worth reading for anyone who does research synthesis and is concerned about the impact of the errors BBR describes:

2 Likes

#9

Here is the paper

2 Likes

#10

Of course loss of power is only one of several problems associated with artificial grouping. For example, under some circumstances a cut can also introduce a spurious association.

Maxwell SE, Delaney HD. Bivariate median splits and spurious statistical significance. Psychological Bulletin 1993;113(1):181.
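A quick sketch of the phenomenon they describe (my own toy simulation, not theirs):

    # Two correlated predictors; the outcome depends only on x1. Median-splitting
    # both can manufacture a 'significant' effect for x2.
    set.seed(3)
    n  <- 500
    x1 <- rnorm(n)
    x2 <- 0.6 * x1 + sqrt(1 - 0.6^2) * rnorm(n)   # correlated with x1, no true effect on y
    y  <- x1 + rnorm(n)
    coef(summary(lm(y ~ x1 + x2)))["x2", ]        # near zero, as it should be
    g1 <- x1 > median(x1); g2 <- x2 > median(x2)
    coef(summary(lm(y ~ g1 + g2)))["g2TRUE", ]    # typically comes out 'significant'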

4 Likes

#11

Thanks for the reference; looks like the author put it up on researchgate

In relation to meta-analysis, Hunter and Schmidt wrote this article on the potential for correction:

I came across this critique of the use of logistic regression. I don’t agree with the thesis, but the reference section is good.

From reading the various papers in this thread, it seems the big loss of information from categorizing continuous variables comes from treating items that have an order as if they were equivalent. An ordinal analysis (i.e. converting interval or ratio scale data to ranks) loses the distances among values but still maintains the order among observations. That doesn’t lose nearly as much information, and it can have some robustness benefits.
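A small toy check of that last claim:

    # Ranks preserve order but not distances; with one wild measurement error the
    # rank-based (Spearman) correlation barely moves while Pearson collapses.
    set.seed(4)
    x <- rnorm(100)
    y <- 0.7 * x + rnorm(100, sd = 0.5)
    cor(x, y)                        # Pearson on the raw values
    cor(x, y, method = "spearman")   # on ranks; nearly the same here
    y[1] <- 50                       # one gross outlier
    cor(x, y)                        # Pearson badly distorted
    cor(x, y, method = "spearman")   # barely changes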

0 Likes

#12

There’s lots I disagree with in that paper. It also overlooks the fact that with a proportional odds model you can get an odds ratio without losing a significant amount of information.

0 Likes

#13

Re attenuation of the correlation due to dichotomizing, Peters and van Voorhis published a correction factor in 1940 in their book Statistical Procedures and their Mathematical Bases (NY: McGraw-Hill). It was apparently soon forgotten.

Gary McClelland has a nice little slider demo of this.
http://psych.colorado.edu/~mcclella/MedianSplit/

EDIT: There were originally two different demos, but Gary has apparently consolidated them.

1 Like