RCT floor effects and high drop-off rate

Hi there,

I’m trying to analyse an RCT. It’s commercially sensitive, so I’ll have to talk abstractly, I’m afraid. My outcomes are two psychometric scores (0–100), but I’ll just talk about one; call it Y. Y is measured before the intervention (Y_1) and after the intervention (Y_2). We expect changes to be small (\Delta Y = 5–10).

The research question is: ‘does the intervention (C) affect the change in Y relative to placebo?’

There are two problems I’m having.

Problem 1: floor effects

There are floor effects (i.e. a large density of observations at 0), which I suspect is due to censoring: the measurement scale is insufficient to capture the true variation in the underlying construct being measured.

Obviously a linear model will not suffice. I’ve tried a Tobit model

Y_2 \sim Y_1 + X + C

(X = other covariates, C = placebo/treatment), but that only handles censoring in the outcome; I also need to adjust for Y_1, which is itself censored. The residuals are also very strongly associated with Y_1. For similar reasons I suspect quantile regression is out as well.
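For reference, the Tobit fit I tried looks roughly like this (a sketch with a hypothetical data frame `dat`; scores at 0 are treated as left-censored):

```r
# Tobit as a left-censored Gaussian model via survival::survreg;
# observations at 0 are treated as left-censored at 0
library(survival)

tobit <- survreg(Surv(Y2, Y2 > 0, type = "left") ~ Y1 + X + C,
                 data = dat, dist = "gaussian")
summary(tobit)
```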

The only other way I can see is to impute the censored values of Y_1 and then use the imputed values in Y_2 \sim Y_1 ...

Question: Do you have any suggestions beyond imputing Y_1?

Problem 2: high drop-off

The study was done online with a target sample size of 4,000. The recruiter kept recruiting until that number of completers was met, but a large number (2,000) only partially completed the study. So 6,000 were recruited but only 4,000 completed. The reasons for drop-off aren’t clear, but they are definitely not random.

With so many partial cases, imputing the outcomes is not feasible.

My question is: do you think this recruitment procedure biases the results, and how can I contextualise them? My thoughts are yes: the complete cases are no longer a random sample and will heavily bias the result. Just reporting the drop-off in a flow chart seems disingenuous given its size.

Thanks in advance.


The number of partial cases makes multiple imputation more necessary, not less. Just dropping incomplete cases will lead to major bias.

To your modeling question: if you think of this in terms of the fundamental randomized trial question, things get much simpler. The fundamental question is: do two subjects who started at the same point, and who differ only in treatment assignment, end up at the same point? This means that analysis of change scores is inappropriate and creates impossible-to-solve floor and ceiling effects. Instead, use a semiparametric ordinal model (e.g., a proportional odds model) for the ordinal final raw Y value, adjusted for the baseline value. For full generality, allow the baseline value to be modeled nonlinearly. I talk about this in the BBR course.
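A minimal sketch of this approach with the rms package (the data frame `dat`, treatment variable `treat`, and the choice of 4 spline knots are assumptions, not part of the original advice):

```r
# Proportional odds model for the raw follow-up score, adjusting
# flexibly for the baseline score via a restricted cubic spline
library(rms)

dd <- datadist(dat); options(datadist = "dd")

f <- orm(Y2 ~ rcs(Y1, 4) + treat, data = dat)
f

# The fit yields predicted quantiles of Y2, e.g. the median by arm,
# sidestepping the floor effect in the mean
med <- Quantile(f)
Predict(f, treat, fun = function(lp) med(0.5, lp))
```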


I guess there must be some precedent for deriving the score when a subset of components is missing? E.g., “Imputation of SF-12 Health Scores for Respondents with Partially Missing Data”.

Do they seem to get a certain distance into the survey before bailing out? E.g., x of y components completed; you could then compare that completed subset between completers and non-completers, along the lines of the sketch below.
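Something quick like this might be a starting point (entirely hypothetical variable names):

```r
# How far did participants get before bailing out, and do completers
# and non-completers differ at baseline?
with(dat, table(completed, n_components_done))

# compare baseline score distributions by completion status
with(dat, tapply(Y1, completed, summary))
```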

Hi folks, thanks very much for your input. This forum really is an amazing community and resource; I’m recommending it to my non-bio/ML colleagues.

@f2harrell - Putting the question like that does make it a lot simpler, thanks. The ordinal model was the first thing I thought of (thanks to RMS), but I dismissed it after thinking about censoring. I’ve tried it and it works really well (at least judging by the predicted quantiles).

Re: imputation, the 2,000 drop-offs didn’t record an outcome, so I would be imputing Y_2 for a third of the full sample (2,000 of the 6,000 recruited). The missingness would be a mixture of:

  1. The higher the value of Y, the lower the response rate.
  2. Participants mistakenly believing the study was over.

I’m unsure of the proportions, however. As you suggest, @PaulBrownPhD, I will look at completers vs non-completers, especially their baseline measurements, which were taken for everyone and seem to strongly predict the outcome anyway. And thanks for the link.

I was under the impression that the fraction of missing values was of vital importance for MI to work, but apparently not!

https://www.sciencedirect.com/science/article/pii/S0895435618308710


The fraction of missing values is crucial, but it works in the opposite direction from what most people think: in the majority of cases, the more missings there are, the more you need to handle them formally. Full Bayesian modeling would work a bit better than multiple imputation.

Another good reference is this.


Thanks for your help and the references - I took your recommendations and used MI with PMM via the mice package (I needed a bit more flexibility than Hmisc was able to provide).
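Roughly the workflow I ended up with, as a sketch (hypothetical variable names, and the number of imputations is arbitrary):

```r
# PMM imputation with mice, then pooling proportional odds fits across
# the completed data sets; Hmisc::fit.mult.impute (loaded via rms)
# accepts a mice 'mids' object directly
library(mice)
library(rms)

imp <- mice(dat, m = 20, method = "pmm", seed = 1)

fit <- fit.mult.impute(Y2 ~ rcs(Y1, 4) + treat + X,
                       orm, imp, data = dat)
fit
```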

The rms package worked really well for this too. Great stuff.

What flexibility did you need? I may be able to expand aregImpute in the future.