I conducted an (unscientific) poll yesterday on Twitter regarding whether people felt that DSMB members in RCTs should be blinded to treatment assignment. I was surprised to see that the majority of voters said “Blinded,” when most clinical-trial-savvy folks that I know are very strongly of the opinion that the DSMB should be privy to the treatment assignments. I have created a Twitter thread, which I will reference here, but would also like to open the question for discussion in this forum. Here is my Twitter thread from today:
(THREAD) follow-up on whether the DSMB should be blinded to treatment assignments in RCTs
I’m a little surprised at how many people voted that the DSMB should be blinded. Final results at 390 votes were 63% for keeping the DSMB blinded (i.e. just “A/B” in the report) vs. 37% for labeling the treatment arms (i.e. “Active/Placebo”).
The comments on the poll were mixed – but as the day went on, they included an increasing proportion of incredulous “Wait, why in the world would the DSMB be blinded at the interim analyses?”
I’m with those folks. I do NOT believe that the DSMB should be blinded to treatment assignment as they review results & deliberate whether the trial should continue. I was surprised to see so many votes for a blinded DSMB. Let’s dig into this.
I suspect that a lot of votes for blinded DSMB were derived from the belief that MOAR BLINDING is always better. Like @JasonConnorPhD said here:
Blinding of the recruiters to treatment assignment is important because we don’t want their knowledge of what the next assignment could be to influence who they try to enroll in the trial (i.e. “the next assignment is for the Intervention…this person looks pretty sick…they should get the Intervention”)
Blinding of the participants to treatment assignment is important because we do not want their knowledge of what they are getting to influence their behavior during the trial or any subjective, self-reported outcomes
Blinding of the assessors to treatment assignment is important for similar reasons: we do not want the assessors’ knowledge of what the participant is getting to influence their assessment of outcomes (i.e. probing one group harder to ask about AE’s; reading an imaging study differently)
But none of those apply to the DSMB. They are not involved in participant recruitment or outcome assessment. They cannot introduce bias to the data as it is collected. They are presented interim results and charged with the decision of whether the trial should continue.
So why should they be blinded, then? From that camp, @ErickRScott specifically comments that the DSMB could allow bias to creep into their decision-making if they know which treatment arm is which:
First counterpoint: keeping the DSMB blinded isn’t strictly protective against bias. It just allows a different bias to creep in; a blinded DSMB will (probably) take some guess about which arm is which.
I’d argue that labeling the arms is more likely to be protective against bias (since they know which is which – there is no speculation) than presenting them blinded.
Second counterpoint: trials may generally be stopped early for three broad reasons: efficacy, safety, or futility. Perhaps Erick can explain his view, but I do not think the DSMB can make a properly informed decision about any of these without knowledge of the treatment arms.
EFFICACY: the early evidence is overwhelmingly in favor of the Experimental treatment arm
The bar for efficacy stopping is generally very high, for two reasons: 1) early results can be very noisy, and trials stopped early for efficacy tend to produce overly optimistic estimates of treatment benefit; 2) alpha-spending is required to control the risk of falsely concluding that the treatment is efficacious due to multiple looks at the data. As @JasonConnorPhD reminds us, there is no additional alpha-spend from the DMC looking at efficacy as long as the DSMB is locked into prespecified efficacy stopping rules.
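The multiple-looks problem is easy to demonstrate with a small simulation: with no alpha-spending adjustment, applying a naive two-sided 5% threshold at each of four interim looks inflates the overall false-positive rate well above 5%. This is a hypothetical sketch (look schedule and sample sizes are made up, not from any particular trial):

```python
import random
import statistics

random.seed(1)
n_sims = 4000
looks = [50, 100, 150, 200]   # cumulative patients per arm at each interim look
z_naive = 1.96                # unadjusted two-sided 5% threshold, reused at every look

false_positives = 0
for _ in range(n_sims):
    # simulate under the null: both arms drawn from the same distribution
    a = [random.gauss(0, 1) for _ in range(looks[-1])]
    b = [random.gauss(0, 1) for _ in range(looks[-1])]
    for n in looks:
        diff = statistics.fmean(a[:n]) - statistics.fmean(b[:n])
        se = (2 / n) ** 0.5          # known-variance z-test standard error
        if abs(diff / se) > z_naive:
            false_positives += 1
            break                    # trial "stopped early for efficacy"

print(false_positives / n_sims)      # well above the nominal 0.05
```

This is exactly why prespecified group-sequential boundaries (O’Brien-Fleming, alpha-spending functions, etc.) set the early-look thresholds much higher than 1.96.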
However, even if the DSMB is locked into stopping rules & told that the efficacy threshold has not been met, knowledge of the efficacy results is still important for the DSMB in the context of the other possibilities below…
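The “overly optimistic estimates” point can also be illustrated with a quick simulation: among hypothetical trials that happen to cross a high early-stopping bar at the interim, the average estimated effect substantially overstates the true effect (all numbers below are invented for illustration):

```python
import random
import statistics

random.seed(2)
true_effect = 0.2     # true standardized benefit (hypothetical)
n_interim = 100       # patients per arm at a single interim look
z_stop = 2.8          # a high early-stopping bar, O'Brien-Fleming-like

early_estimates = []
for _ in range(10000):
    a = [random.gauss(true_effect, 1) for _ in range(n_interim)]
    b = [random.gauss(0.0, 1) for _ in range(n_interim)]
    diff = statistics.fmean(a) - statistics.fmean(b)
    se = (2 / n_interim) ** 0.5
    if diff / se > z_stop:                 # crossed the efficacy boundary
        early_estimates.append(diff)

# among trials stopped early, the average estimate overstates the truth
print(statistics.fmean(early_estimates))   # well above the true 0.2
```

Only the trials whose noisy interim estimate happened to be large cross the boundary, so conditioning on early stopping selects for overestimates.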
SAFETY: the early evidence is suggestive of potential harm from the Experimental treatment.
Back to what I just said above: if the DSMB is locked into prespecified rules about efficacy stopping, why do they need to know anything more than “the stopping rule has not been met” for efficacy?
Well, because they’re also charged with monitoring safety and weighing the risk/benefit of continuing the trial.
If there is a mild/moderate safety concern (say, an increased risk of one adverse event), it is relevant for the DSMB to know whether the interim results are suggestive of efficacy, and to what degree, because that should be taken into account in the decision to stop for safety.
FUTILITY: often confused with safety stopping, this is different; futility stopping occurs not necessarily because the Experimental treatment is harming patients, but because the interim data show little benefit and it is unlikely that the trial will ultimately conclude that the Experimental treatment is beneficial.
What it means to stop a trial for “futility” is confusing to a lot of people. One might argue that if there is no safety concern, there’s no “harm” in allowing the trial to carry on to its originally planned sample size, even if the interim results are looking pretty null.
Well, you’re still exposing patients to participation in a research study where the likelihood of benefit is fairly low. I don’t think it’s unethical to keep enrollment open until the originally planned sample size, but YMMV.
It’s also dependent on who is funding the trial. Naturally, sponsors will not want to burn additional resources on trials that are now unlikely to end in a positive result; if the interim analysis shows a low probability of demonstrating treatment benefit, they may prefer to kill the trial there and devote resources to other, more promising treatments.
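One common quantitative tool for the futility question is conditional power: the probability of a positive final result given the interim data, often computed under the assumption that the observed trend continues (the B-value / Brownian-motion formulation). A minimal stdlib-only sketch — the interim z-value and information fraction here are hypothetical:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def conditional_power(z_interim, t, z_crit=1.96):
    """Probability of crossing z_crit at the final analysis, given the
    interim z-statistic at information fraction t, assuming the observed
    trend persists ('current trend' assumption)."""
    b_final_mean = z_interim / math.sqrt(t)   # E[B(1)] if the trend persists
    return 1 - norm_cdf((z_crit - b_final_mean) / math.sqrt(1 - t))

# hypothetical interim: z = 0.5 halfway through the trial (t = 0.5)
cp = conditional_power(z_interim=0.5, t=0.5)
print(round(cp, 2))   # ≈ 0.04: very unlikely to end in a positive result
```

A DSMB seeing conditional power this low might reasonably recommend stopping for futility — but note that the calculation itself requires knowing the direction of the treatment difference, which a blinded committee would not have.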
In summary, the risk/benefit of whether to stop trials is usually not “symmetric” as @JasonConnorPhD puts it.
There is a difference between stopping for efficacy, safety, and futility. The level of evidence required to stop a trial for efficacy is higher than that required to stop for safety concerns.
The @NEJM article brought up by @thebyrdlab, @DonDonRic and others has a nice passage on this:
“Masked monitoring is thought to increase the objectivity of monitors by making them less prone to bias. What is overlooked is what masking does to degrade the competency of the monitors.”
“The assumption underlying masked monitoring is that recommendations for a change in the study protocol can be made independently of the direction of a treatment difference, but this assumption is false.”
“More evidence is required to stop a trial because of a benefit than because of harm. Trials are performed to assess safety and efficacy, not to “prove” harm…it is unreasonable to make the monitors behave as if they were indifferent to the direction of a treatment difference.”
One of the questions, from @DrSarahMcNab:
There’s your answer, Dr. McNab, at least the best way I can think of to explain it.
Most guidance on DMCs and DSMBs is pretty much unanimous on this issue:
One of the early replies, from @numbersman77:
Referenced this publication in Clinical Trials:
“DMCs should have access to unblinded efficacy and safety data throughout the trial to enable informed judgments about risks and benefits”
@MarionKCampbell brings up the DAMOCLES charter, which makes a similar recommendation:
The Clinical Trials Transformation Initiative recommendations for Data Monitoring Committees agrees:
@thebyrdlab also provided this excellent videocast from NIH:
Special note: while the DSMB should not be blinded to the treatment groups at the interim analysis, maintaining confidentiality of all interim results is important (whether or not the arms are labeled).
Dissemination of interim results (even blinded results) can compromise recruitment and/or influence patient and physician behaviors
That’s all I’ve got for now.
I look forward to ongoing discussion in this forum. Perhaps I can even learn a few more things.