Our team just completed an RCT of an emergency therapy for a condition with a high mortality rate, in which we randomized one-third of eligible patients (just over 300). Another third of the eligible patients consented to an observational "arm" in which patients received the same two therapies under study, with the same baseline and outcome data measurement. The final third didn't get into either group (too sick, too well, provider or patient refusal, etc.).
In the RCT, we found a significant interaction between treatment group and a diagnostic subgroup. There are only two diagnostic possibilities in the population: one treatment was better for one diagnostic subgroup, and the other treatment was better for the other.
We planned, prior to starting the trial, to estimate the treatment effect in the nonrandomized group as a secondary analysis (even though the primary effort was the stand-alone RCT), and we thought we would capture most if not all nonrandomized, eligible patients. With consent required for the observational group, among other factors, enrollment in that group went way down. We always planned to report the RCT first, as a stand-alone effort.
Should we proceed to investigate the observational group to see whether the randomized results generalize, perhaps with a propensity score analysis? Or, given the very selected nature of our populations, could this be dangerous? If treatment effects in the randomized versus observational groups go in different directions, for example, what would we conclude? The observational group has substantial bias in who got which treatment, but we have extensive baseline data on variables that influence both treatment choice and outcome.
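To make the propensity score idea concrete, here is a minimal sketch of inverse-probability-of-treatment weighting (IPTW) on simulated data. Everything here is hypothetical: the covariates, the treatment-assignment mechanism, and the true effect of 0.3 are invented for illustration, not taken from our study, and the outcome is continuous only to keep the example short (our real outcome is mortality).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical baseline covariates that drive BOTH treatment choice and outcome
# (the confounding structure we worry about in the observational arm).
x = rng.normal(size=(n, 2))

# Nonrandom treatment assignment: probability of getting therapy A depends on x.
logit_ps_true = 0.8 * x[:, 0] - 0.5 * x[:, 1]
treat = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_ps_true)))

# Outcome with a true treatment effect of 0.3, plus covariate effects and noise.
y = 0.3 * treat + 1.0 * x[:, 0] + 0.5 * x[:, 1] + rng.normal(scale=1.0, size=n)

def fit_propensity(X, t, iters=25):
    """Fit a logistic regression of treatment on covariates by Newton's method
    and return the estimated propensity scores."""
    X1 = np.column_stack([np.ones(len(X)), X])  # add intercept
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))
        grad = X1.T @ (t - p)                       # score
        H = X1.T @ (X1 * (p * (1 - p))[:, None])    # Fisher information
        w += np.linalg.solve(H, grad)
    return 1.0 / (1.0 + np.exp(-X1 @ w))

ps = fit_propensity(x, treat)

# Naive difference in means: confounded, biased away from 0.3.
naive_ate = y[treat == 1].mean() - y[treat == 0].mean()

# IPTW estimate: weight treated by 1/ps and controls by 1/(1-ps).
iptw_ate = (np.average(y[treat == 1], weights=1.0 / ps[treat == 1])
            - np.average(y[treat == 0], weights=1.0 / (1.0 - ps[treat == 0])))

print(f"naive: {naive_ate:.3f}, IPTW: {iptw_ate:.3f}  (truth: 0.300)")
```

The point of the sketch is the comparison at the bottom: when the propensity model includes the variables that drove treatment choice, the weighted estimate lands near the simulated truth while the naive comparison does not. Whether that holds in our data depends entirely on whether our "extensive baseline data" really captures the drivers of treatment choice, which is exactly the assumption we cannot verify.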
Thoughts on whether to investigate this observational group? Or is it potentially too messy and might muddy the waters?