Outcome dispersion and reduction of outliers as a primary outcome in a clinical RCT?

Many health interventions, such as surgical procedures for chronic conditions, are already very effective at improving patient health. This means that superiority of a novel procedure is extremely hard to show, because (a) there is very little left to improve after controlling for certain baseline variables, and (b) there may not even be room for improvement if the average group result is already 90-95% of the maximum value, e.g. on quality-of-life questionnaires. This is especially the case when the rationale of the study is that a minimally clinically important difference (MCID) should be achieved at the group level.

A realistic assumption is that a novel intervention may improve group-level outcomes by a few points when the outcome is measured on a 0-100 point interval scale. What is even more realistic is that this difference may well be due to a lower number of outliers in the primary outcome, i.e. fewer patients with a poor outcome (<50 points on a 0-100 scale where >90 is considered "healed" or a good outcome). A lower proportion of poor outcomes may produce a group mean that is a few points higher, and also smaller dispersion.
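To make that concrete, here is a back-of-the-envelope sketch (all numbers are hypothetical, not from any real trial): if most patients cluster around 92 points and a small tail of poor outcomes sits around 40 points, halving the poor-outcome proportion from 10% to 5% raises the group mean by a few points and visibly shrinks the SD.

```python
# Hypothetical mixture arithmetic: "good" outcomes around 92 points (SD 5),
# "poor" outcomes around 40 points (SD 10); only the poor-outcome proportion differs.
def mixture_mean_sd(p_poor, mu_good=92.0, sd_good=5.0, mu_poor=40.0, sd_poor=10.0):
    p_good = 1.0 - p_poor
    mean = p_good * mu_good + p_poor * mu_poor
    # variance of a two-component mixture: E[X^2] - (E[X])^2
    var = (p_good * (sd_good**2 + mu_good**2)
           + p_poor * (sd_poor**2 + mu_poor**2)) - mean**2
    return mean, var**0.5

for p_poor in (0.10, 0.05):
    mean, sd = mixture_mean_sd(p_poor)
    print(f"{p_poor:.0%} poor outcomes -> mean {mean:.1f}, SD {sd:.1f}")
# 10% poor outcomes -> mean 86.8, SD 16.6
#  5% poor outcomes -> mean 89.4, SD 12.5
```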

I wonder whether there is any basis for this approach when planning the primary outcome of an RCT. A novel intervention may not reach the MCID, but it may very well reduce the dispersion and result in fewer patients with a poor outcome. I have not managed to find clinical studies that have used this approach. My initial impression is that it should involve a comparison of variances as the primary outcome, because that makes use of the continuous outcome. An alternative option is to compare proportions, but this would require an arbitrary dichotomization of the outcome variable, e.g. <50 points is considered poor, and there is usually no universal definition of a poor outcome when QoL or other patient-reported outcome measures are used. An equivalence approach does not seem feasible, since at least to me it carries a negative connotation regarding the novel intervention (aiming for equivalence while assuming superiority in terms of poor outcomes).
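For illustration, the two options could look something like the sketch below, run on simulated data with hypothetical parameters (the Brown-Forsythe variant of Levene's test and Fisher's exact test are just common choices here, not the only possible ones):

```python
# Illustrative sketch on simulated data (hypothetical parameters, not trial data):
# option 1 compares dispersion on the continuous outcome, option 2 compares
# proportions after an arbitrary dichotomization at <50 points.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200  # patients per arm (hypothetical)

def simulate_arm(p_poor):
    """Draw 0-100 scores: good outcomes ~N(92, 5), poor outcomes ~N(40, 10)."""
    poor = rng.random(n) < p_poor
    scores = np.where(poor, rng.normal(40, 10, n), rng.normal(92, 5, n))
    return np.clip(scores, 0, 100)

control, novel = simulate_arm(0.10), simulate_arm(0.05)

# Option 1: heterogeneity of variance on the continuous outcome
# (Brown-Forsythe = Levene's test centered at the median, a robust choice).
w, p_var = stats.levene(control, novel, center="median")
print(f"Brown-Forsythe: W={w:.2f}, p={p_var:.3f}")

# Option 2: dichotomize at an arbitrary threshold (<50 = poor outcome)
# and compare the two proportions.
table = [[int(np.sum(control < 50)), int(np.sum(control >= 50))],
         [int(np.sum(novel < 50)),   int(np.sum(novel >= 50))]]
odds_ratio, p_prop = stats.fisher_exact(table)
print(f"Fisher's exact: OR={odds_ratio:.2f}, p={p_prop:.3f}")
```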

So my question is: have there been studies that used heterogeneity of variance as a primary outcome, or are there other methodological aspects to consider in the clinical situation I described (aiming to reduce the number of outliers)?

What is the (patient-centered) rationale for introducing novelty in such a situation? Presumably, surgeons have some nontrivial understanding (or theory) of why a small fraction of patients experiences an unsatisfactory outcome, and the novel surgical technique is thought to address the relevant factors. Why jettison this understanding at the point of applying statistical methods?

Helpful inspiration might be found in the spirit of engineering disciplines where a functioning technology is improved incrementally. This is not done principally through statistical methods (which typically celebrate and feed upon ignorance in the form of randomness) but through methods better able to incorporate substantive theoretical knowledge.

Let me elaborate on my question. The main idea was that a certain post-surgical intervention could reduce the number of unsatisfactory outcomes: the surgical procedure would be identical, but, for example, a different rehabilitation protocol would be used to improve the patient-centered outcome. Since overall costs and burden are high in patients with a poor outcome, it could be investigated whether the new intervention reduces the likelihood of a poor outcome. I think in medicine we need such evidence from statistical methods, because the payers are interested in whether this new intervention is truly needed.
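If the poor-outcome proportion were indeed made the (dichotomized) primary endpoint, a rough sample-size calculation shows why this matters to payers and planners. The rates below (10% vs. 5% poor outcomes) are purely hypothetical, and the formula is the standard normal approximation for two independent proportions:

```python
# Rough sample-size sketch (hypothetical rates): patients per arm needed to
# detect a drop in the poor-outcome proportion from 10% to 5% with a
# two-sided alpha of 0.05 and 80% power, via the normal approximation.
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

print(f"~{n_per_arm(0.10, 0.05):.0f} patients per arm")  # ~434 with these rates
```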

In my experience, physical therapists can be keenly observant. What theories do the PTs have regarding which patients experience unsatisfactory rehab, and why? Can PTs reliably predict a failing rehab process? Can they do this at baseline? After the first few rehab sessions? Might there be an opportunity to ‘cross over’ to the new rehab protocol once the standard protocol seems to be underperforming?