How do we mistake biological plausibility for clinical relevance?

Research in critical care medicine is increasingly challenging. As an intensivist, I am getting used to seeing hopelessly weak hypotheses receive elaborate statistical treatment. The field has focused more on better data analysis than on better hypotheses.

I am studying the pitfalls of translating biological plausibility into clinical relevance, and ultimately into the estimated treatment effect in an RCT. Unfortunately, academia has little or no space for such a topic, so I decided to discuss it on Substack, in short and provocative posts.

The problem has two parts. The first is the causal chain supporting the RCT’s causal assumption. Here I examine studies with confused hypotheses that unsurprisingly yielded “negative” results.

The Land of Irrelevance #1 - The PReVENT Study

The Land of Irrelevance #2 - The MENDS2 Study

The second part is the marginal utility of the hypothesis in the clinical scenario, especially in light of what I call the Additive Paradigm. In this post, I lay out a framework:

How do we mistake biological plausibility for clinical relevance?

One good example is the “Chloride Case”, where I discuss how a fragile hypothesis was tested in large RCTs and even subjected to meta-analysis. This bad hypothesis received disproportionate attention.

Finally, there is an example of how the misuse of statistics can produce a “significant” result for a terrible hypothesis.

Irrelevant Yet Significant: The DEFENDER study (part 1/2)

I think this is a major problem in biomedical research, and I would truly appreciate input from this respected community of biostatisticians. I believe it is a problem few saw coming. How can it be solved?

5 Likes

I’m always glad to see problems in critical care medical research discussed and I hope you get comments from several experts. One minor comment is that hypotheses per se are not as useful as they seem. I’d rather have good questions or, better still, physiologic or patient status measurements that are worth measuring and worth doing something about. Estimation is often more important than hypothesis formulation.

1 Like

Can you elaborate? Specifically, can you point to an RCT that showed therapeutic efficacy, where selection of the therapy being tested wasn’t founded in a well-formulated hypothesis about mechanism of disease development?

I can see the importance of asking good questions as a prerequisite for designing an RCT. But it seems like a solid hypothesis is also a prerequisite…

For example:

Questions: “What is the distribution of clinical trajectories among patients with pneumococcal pneumonia who are sick enough to be admitted to the ICU?”; “Among patients who die in the ICU following a diagnosis of pneumococcal pneumonia, what are the major proximate causes of death?” Examples might include inability to wean off mechanical ventilation, intractable hemodynamic instability…

Hypothesis: “Given that X has been identified as the main proximate cause of death among patients with pneumococcal pneumonia, and considering the fact that treatment Y addresses this mechanism, we propose that treatment Y, applied to patients with pneumococcal pneumonia, might reduce the risk of death.”

1 Like

The trial objective could be reframed as follows: the primary objective of this RCT is to estimate the relative risk of treatment Y versus comparator treatment Z for the treatment of pneumococcal pneumonia. Z could be a placebo, an active control, etc. (Frank and others might prefer odds ratio to relative risk, but that is a peripheral technical point.)

A real-life example was the FDA’s criteria for emergency use authorization of the original covid-19 vaccines back in the fall of 2020. The pre-specified efficacy criterion stated in the guidance for industry “Development and Licensure of Vaccines to Prevent COVID-19” was “a point estimate for a placebo-controlled efficacy trial of at least 50%, with a lower bound of the appropriately alpha-adjusted confidence interval around the primary efficacy endpoint point estimate of >30%”. This is an estimation-based, rather than null hypothesis-based, framework for evaluating medical products.
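As a sketch of how that estimation-based criterion works in practice, here is a minimal Python illustration. The case counts are entirely hypothetical (not from any real trial), and the simple normal approximation on log(RR) is my own choice for brevity:

```python
import math

def vaccine_efficacy_ci(cases_vax, n_vax, cases_pbo, n_pbo, z=1.96):
    """Point estimate and approximate CI for vaccine efficacy VE = 1 - RR,
    using a normal approximation on log(RR)."""
    rr = (cases_vax / n_vax) / (cases_pbo / n_pbo)
    # Delta-method standard error of log(RR)
    se = math.sqrt(1 / cases_vax - 1 / n_vax + 1 / cases_pbo - 1 / n_pbo)
    rr_lo = math.exp(math.log(rr) - z * se)
    rr_hi = math.exp(math.log(rr) + z * se)
    # VE bounds flip relative to RR bounds: VE lower bound = 1 - RR upper bound
    return 1 - rr, 1 - rr_hi, 1 - rr_lo

# Hypothetical counts, not taken from any actual vaccine trial:
ve, ve_lo, ve_hi = vaccine_efficacy_ci(10, 15000, 60, 15000)

# The FDA-style decision rule: point estimate >= 50% AND CI lower bound > 30%
meets_criterion = ve >= 0.50 and ve_lo > 0.30
```

A real submission would use a more careful interval than this, but the logic is the same: judge the point estimate and the interval’s lower bound, not a p-value.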

This thought is related to John Tukey’s flippant comment (Stat. Sci., 1991): “All we know about the world teaches us that the effects of A and B are always different–in some decimal place–for any A and B. Thus asking ‘Are the effects different?’ is foolish.” I suspect that Frank is advocating for us to change our question to “how different are they, and how precisely do we know this”? Answering this question would provide a richer set of information for decision makers than the mere acceptance or rejection of a point-null hypothesis. This argument generalizes to other kinds of hypotheses, such as equivalence, non-inferiority, etc., which can often be re-cast in an estimation framework.

I hesitate to make blanket statements. There are well-known examples in physics where Tukey’s comment is decisively wrong (e.g., neutrino mass).

I couldn’t find the original 2020 FDA document, so the quote is taken from https://www.fda.gov/media/142749/download

The Tukey quote is from https://www.jstor.org/stable/2245714

2 Likes

Thanks. I wasn’t advocating for any particular type of experimental design (e.g., null hypothesis testing). Rather, I was just saying that RCTs that have been able, historically, to identify therapeutic efficacy, have usually been grounded in a solid theory of disease mechanism/causation (e.g., plaque rupture causes STEMI).

In situations where disease mechanism and/or trajectory are poorly understood or highly complex, it becomes very challenging to design an experiment that will be capable of discerning the efficacy of any therapeutic intervention (separating signal from noise). I don’t know anything about economics, but, given the complexity of economies, RCTs in this field probably face similar challenges…

As noted in the original post, critical care research doesn’t usually deal with simple/linear causal pathways from “root” disease to outcome. Rather, it deals with complex webs/chain reactions of potentially life-threatening events that were triggered by the root disease. It’s challenging to figure out where in the web to intervene in order to discernibly impact important outcomes. Asking granular questions about disease mechanisms/trajectory will be important in mapping the web; a lot of important work has likely already been done in this regard.

I think the author of the original post is saying that, once the web is mapped, the “structural” threads will need to be distinguished from the extraneous threads (e.g., by asking “What are the most common proximate causes of death?”), and therapeutic development focused on those threads (?) And only then, after all this preliminary work, will it be possible to design RCTs that stand any chance of separating therapeutic efficacy signals from noise.

3 Likes

This is a very important question. The issue of marginal utility.

These are the fundamental overlapping problems which have caused “critical care RCT nihilism”.

  1. Marginal utility
  2. Too many patients who will die regardless, making N insufficient.
  3. Legacy lumped critical care “synthetic syndromes,” like ARDS and sepsis, that contain a mix of different diseases, many of which lack the targeted driver.

2 Likes

As an advocate participant in oncology trials in the National Clinical Trials Network (NCTN), I often wondered if the instinct to be a good colleague held back frank negative feedback on proposed trial concepts.

4 Likes

Simulating studies before running them might help a lot. I’m looking at an $11M trial right now that (judging from its ClinicalTrials.gov entry) probably ought to spend several percent of its budget on up-front simulation studies. Isn’t there a @f2harrell quote (adjusted for inflation) along the lines of,

A $110 analysis can make an $11M trial worth $1,100.
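In that spirit, here is a minimal sketch of what such an up-front simulation might look like: a Monte Carlo power estimate for a two-arm trial with a binary outcome, analyzed with a simple two-proportion z-test. Every number below (event rates, sample size) is a hypothetical assumption, not a reference to any actual trial:

```python
import math
import random

def simulate_power(n_per_arm, p_control, p_treated,
                   n_sims=2000, seed=1):
    """Monte Carlo power estimate for a two-arm trial with a binary
    outcome, analyzed by a two-sided two-proportion z-test at alpha=0.05.
    All parameters are assumptions to be varied by the trial designer."""
    rng = random.Random(seed)
    z_crit = 1.96
    hits = 0
    for _ in range(n_sims):
        # Simulate event counts in each arm
        x_c = sum(rng.random() < p_control for _ in range(n_per_arm))
        x_t = sum(rng.random() < p_treated for _ in range(n_per_arm))
        # Pooled-proportion z-test
        p_pool = (x_c + x_t) / (2 * n_per_arm)
        se = math.sqrt(2 * p_pool * (1 - p_pool) / n_per_arm)
        if se > 0 and abs(x_t / n_per_arm - x_c / n_per_arm) / se > z_crit:
            hits += 1
    return hits / n_sims

# Hypothetical: 35% vs 30% mortality, 500 patients per arm
power = simulate_power(500, 0.35, 0.30)
```

Under these assumptions the estimated power comes out well below 80%, which is exactly the kind of thing one would rather learn for a few dollars of compute than for $11M.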

I read the Chloride Case with some interest, as a longtime fan of Stewart. But what I’m missing there and in your post above here is the alternative. Can you point to some pressing intensivist questions that do need further investigation in RCTs? Why do these not excite the community?

2 Likes

Thank you for sharing these very interesting insights.
I want to ask your opinion regarding the use of inotropes as part of goal-oriented therapy in patients with acute heart failure and cardiogenic shock. More specifically, isn’t the current approach to using inotropes “too liberal”?

Sorry for the late answer; I lost track of the app and have notifications turned off.

I think we don’t need more RCTs in critical care medicine.

We need more observational studies to describe the syndromes we are treating. As commented above, once we have causality models as strong as the coronary-obstruction model of myocardial infarction, we can look for differential treatment effects. We are wasting money and careers on half-baked hypotheses that serve not to advance knowledge, but to keep the business going.

2 Likes

Thank you. I think there is some confusion here. We intensivists mistake supportive measures for treatment.

Inotropes are mere supportive measures to keep the patient alive while we treat the cause of the shock. That’s why the effect of inotrope choice is marginal, as is that of using an intra-aortic balloon pump, ECMO, etc.

I don’t think the indication is too liberal, because it is only a supportive measure; it should be used as needed. Provided you keep the patient lucid, warm, and urinating, any supportive approach will do. The same goes for ECMO. I would only use fewer inotropes in a context of ready ECMO access.

1 Like

This has been the view of anesthesiologists, who commonly deal with intraoperative hypotension using a range of inotropes, alpha agonists, fluids, and I:E ventilation adjustments guided by heuristics, without RCT data.

We are beginning to understand that profound heterogeneity of the drivers (the targets of treatment) renders broadly applied RCTs negative or, if positive, nonreproducible.

This is especially true when an unknown subset may be harmed.

Other than narrowing the set under test, I’m not sure how this problem with RCTs for broad conditions can be solved.

It would be great to see more input on this, as we face a real dilemma in the profoundly heterogeneous environment of critical care, where, for an unknown portion of patients, nothing that is done will save them. For others, however, there must be a best treatment definable by RCT, if we can figure out how to do that in this environment.
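Some simple arithmetic shows how severe this dilution can be. Suppose, purely hypothetically, that a treatment produces a 12% absolute risk reduction, but only in the 25% of enrolled patients who actually carry the targeted driver, with a small harm in the rest:

```python
# Sketch of effect dilution under heterogeneity (all numbers hypothetical):
# the treatment reduces mortality only in the subset carrying the targeted
# driver, and causes slight harm in the remaining patients.
subset_fraction = 0.25   # fraction of enrolled patients with the driver
arr_in_subset = 0.12     # absolute risk reduction in that subset
harm_elsewhere = 0.01    # small absolute harm in the other 75%

# Trial-wide average effect is the enrollment-weighted mixture:
avg_effect = (subset_fraction * arr_in_subset
              - (1 - subset_fraction) * harm_elsewhere)
```

Under these assumptions the trial-wide average effect is only 2.25%, so a trial powered to detect a 5% absolute difference would likely read as “negative” even though the treatment is genuinely effective in the targeted subset.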

An example of this is the large RCTs of different arterial oxygen targets (this, too, is supportive rather than targeted treatment). Their results have been all over the map, and the latest suggests the target does not matter. Of course, it might matter in a subset, but who knows. Another massive multicenter study, like the one presently in progress, is not going to provide information useful for the individual patient under care.

We really have to address this. RCT nihilism in the ICU is something that has evolved over the past 15 years; before that, we all trusted the critical care RCT.

The trite saying “Given a grant, you can do an RCT on a ham sandwich!” would have seemed silly 20 years ago.

Now, in the ICU, it has proven to be a useful metaphor.

5 Likes

Hi Lawrence

This is a good, short read:

The authors suggest that quality control and consistency of ICU care need to be addressed before there can be any reasonable hope of improving the track record of critical care RCTs:

“…reliable clinical practice and meaningful outcome assessments are also necessary prerequisites to perform thoughtful experiments (RCTs) to determine causality and evaluate the effects of novel interventions.”

Since it has been difficult to show, using RCTs, that intervening on one thread within a complex ICU “web” of care can improve outcomes, maybe ICU RCTs would do better to focus on testing the efficacy of quality control-oriented packages of supportive care, i.e., interventions that target multiple threads in the web (?)

If you reflect on your years of ICU experience, I bet that you could identify some cardinal sins committed routinely by suboptimally-experienced physicians. Maybe some residents on overnight ICU shifts tend to react the wrong way to certain changes in a patient’s ventilatory status, misjudge a patient’s volume status, miss an important clinical sign for shock, fail to appreciate the importance of a worsening lab value, fail to recognize the early signs of an important adverse drug reaction, attempt a new procedure with insufficient oversight (resulting in iatrogenic harm) etc… The impact of any one of these types of mistakes on a patient’s outcome might not be discernible, but, if they were to occur in clusters during the care of a given patient, could plausibly lead to a worse outcome than might otherwise have occurred if some type of preventative quality control algorithm had been in place.

I expect that there are lots of studies looking at quality control within ICUs, but I’m not sure how many published RCTs have examined the efficacy of specific quality control protocols/packages (?)

2 Likes

I think it is a pertinent point. I have been arguing that treatments’ effects are marginal in nature, but one should be aware that unintended effects and complications are not: their effects are additive. We have learned to reduce mortality by avoiding complications; I think that has been the major evolution of the last 20 years of critical care. It is the only way to move forward in the absence of well-described syndromes.

3 Likes

Thanks for bringing this up Erin.

Yes, I am very familiar with the work and teachings of Ognjen Gajic and I agree with you.

It would be nice to get Ogi to join us in this discussion. I will send him the link.

2 Likes

I can’t help but think back to my own ICU exposure in the 1990s. Each night, there was a single medical resident (often second year) manning a 25-30 bed ICU, plus taking ICU admissions through the ER, plus responding to pre-arrests on the hospital wards. A critical care fellow was accessible by phone overnight and would come into the hospital if needed. Needless to say, this arrangement was absolutely mortifying and completely inadequate. It was common for the hapless overnight ICU resident to have to manage several cardiac arrests simultaneously. It’s not a stretch to imagine that the overnight care of other ICU patients might have suffered in this environment…

The question of whether overnight in-house availability of intensivists might improve patient outcomes has been asked before. For example, this 2013 NEJM publication tried to assess whether patients who were monitored overnight by an intensivist on their night of admission fared better than those who were cared for by medical residents on their night of admission [note that the ICU in this study had three medical residents each night (not one) overseeing a 24-bed ICU]:

https://www.nejm.org/doi/full/10.1056/NEJMoa1302854

The authors didn’t identify a meaningful improvement in outcomes for the patients who were admitted and monitored on their first night by an intensivist. However, this study didn’t address the very important question of whether consistent night-time intensivist coverage would, over time, improve the outcomes of all ICU patients.

Unless we start training a lot more intensivists in a hurry, proving that ICU patients do better, on the whole, when intensivists are available, in person, 24/7, might just be an exercise in frustration. But, as Stephen Senn noted (https://onlinelibrary.wiley.com/doi/full/10.1002/sim.6739):

“One of Deming’s important lessons to managers was that they had to understand the origins of variation in a system in order to be able to intervene effectively to improve its quality.”

Could suboptimal in-person access to intensivists, if pervasive, be such a huge source of care quality variability and therefore statistical “noise” within ICU settings that it effectively dwarfs any signals of efficacy in critical care therapeutic trials (?)
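One way to put a rough number on that worry is the classical design-effect formula for within-cluster correlation, treating each care team as a cluster. The intraclass correlation and cluster size below are purely illustrative assumptions:

```python
def design_effect(cluster_size, icc):
    """Variance inflation when outcomes are correlated within clusters
    (care teams/units), relative to fully independent patients."""
    return 1 + (cluster_size - 1) * icc

# Hypothetical: 25 patients per ICU team, modest ICC induced by
# team-to-team variation in care quality.
deff = design_effect(25, 0.05)      # 1 + 24 * 0.05 = 2.2

# Effective sample size is n / deff: a 1000-patient trial analyzed as if
# patients were independent behaves like a much smaller one.
effective_n = round(1000 / deff)
```

Even a modest intraclass correlation driven by variable care quality can roughly halve the effective sample size if the analysis ignores it, which is one concrete mechanism by which care-quality “noise” could swamp an efficacy signal.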

3 Likes

Interesting point. I’ve never thought of how that might affect an RCT (except in the abstract).

Unlike many more straightforward trials, an ICU RCT with, for example, survival as an endpoint is subject to outside factors that go beyond, say, compliance.

Care of the critically ill is complex, and everyone in the hospital knows that survival in critical care is also a direct function of the competence, experience, and commitment of the team.

Although no one likes to admit it, in the hospital there are the A physicians and the B physicians.

I’m not sure how that could be randomized or rendered reproducible.

1 Like