It’s a shibboleth of classical study design that all analyses should be pre-specified and strictly adhered to. However, that can lead to less than ideal decision-making, as in the case of carvedilol’s approval for heart failure under the FDA’s two-positive-trial requirement. The pre-specified primary endpoint was an exercise outcome that did not show a statistically significant difference between treatment arms. However, most of the other outcomes, including mortality, did show a statistically significant beneficial treatment effect of carvedilol. The first time the FDA advisory committee evaluated the trial findings, it declined approval because there weren’t two trials with statistically significant findings on the pre-specified primary outcome. On subsequent review, a different advisory committee took into account the beneficial treatment effect on the secondary outcome of mortality, as well as the prior information that drugs in the same class as carvedilol had previously been found safe and efficacious for heart failure, and approved carvedilol.
Setting aside the issues of NHST in the FDA approval process, what are your favorite case studies of pre-specified analyses going wrong? I’m interested in either kind: a pre-specified analysis that led to a poor decision, or an exploratory analysis that found something that would have been overlooked if only the pre-specified analysis had been conducted.