I can see how important patient/treatment qualitative interactions could be missed as a result of poor RCT design (e.g., inappropriate “lumping” of patients in disparate clinical states into a single trial). Failure to do adequate preparatory work to optimize disease definition, trial inclusion criteria, and measurement tools would be analogous to a drug company skipping preclinical or early-phase clinical studies and jumping straight to phase III: the chance of success would be very low (see below).
I’m not sure whether this problem (which seems much more prevalent in certain medical specialties than others) is best described as poor “construct validity”; I’m not confident of the terminology. Whatever it’s called, we’ve discussed how it could lead to noisy trial results, with a net benefit in some subgroups plausibly being obscured by net harm experienced by other subgroups (yielding an overall neutral trial result). Having said this, though, I suspect that poor construct validity probably isn’t the “rate-limiting” step in the effort to discover efficacious new therapies in most disease areas. The fact is that it’s really hard to discover new treatments, even for the stakeholder with every possible resource at its disposal: the pharmaceutical industry.
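The subgroup-cancellation point can be made concrete with a toy calculation (all numbers below are hypothetical, purely for illustration): if a treatment genuinely helps one subgroup and genuinely harms another, the pooled trial estimate can land near zero even though neither subgroup effect is null.

```python
# Toy illustration (hypothetical numbers): real but opposite subgroup
# effects can average out to a "neutral" overall trial result.

def pooled_effect(subgroups):
    """Weighted average treatment effect across subgroups.

    subgroups: list of (proportion_of_trial, effect) pairs, where a
    positive effect means benefit and a negative effect means harm.
    Proportions must sum to 1.
    """
    assert abs(sum(p for p, _ in subgroups) - 1.0) < 1e-9
    return sum(p * effect for p, effect in subgroups)

# Symmetric case: half the patients gain +10 (say, percentage points of
# some outcome), half lose 10 -- the pooled estimate is exactly zero.
print(pooled_effect([(0.5, +10.0), (0.5, -10.0)]))  # -> 0.0

# Asymmetric case: a real +10 benefit in 30% of patients is diluted
# toward the null by mild harm in the other 70%.
print(round(pooled_effect([(0.3, +10.0), (0.7, -2.0)]), 2))  # -> 1.6
```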
“Drug discovery and development is a long, costly, and high-risk process that takes over 10–15 years with an average cost of over $1–2 billion for each new drug to be approved for clinical use [1]. For any pharmaceutical company or academic institution, it is a big achievement to advance a drug candidate to phase I clinical trial after drug candidates are rigorously optimized at preclinical stage. However, nine out of ten drug candidates after they have entered clinical studies would fail during phase I, II, III clinical trials and drug approval [2,3]. It is also worth noting that the 90% failure rate is for the drug candidates that are already advanced to phase I clinical trial, which does not include the drug candidates in the preclinical stages. If drug candidates in the preclinical stage are also counted, the failure rate of drug discovery/development is even higher than 90%.”
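The quoted ~90% clinical-stage failure rate is just the arithmetic of compounding per-phase attrition. A back-of-envelope sketch (the per-phase success rates below are illustrative assumptions of mine, not figures from the quoted source):

```python
# Back-of-envelope attrition arithmetic (illustrative per-phase success
# rates, assumed for this sketch): overall success for a candidate
# entering phase I is the product of the per-phase success rates.

phase_success = {
    "phase I": 0.52,
    "phase II": 0.29,
    "phase III": 0.58,
    "approval": 0.91,
}

overall = 1.0
for phase, p in phase_success.items():
    overall *= p
    print(f"still alive after {phase}: {overall:.1%}")

# With these assumed rates, roughly 8% of candidates entering phase I
# reach approval -- i.e., ~92% fail, the same ballpark as the quoted
# nine-out-of-ten figure. Counting preclinical attrition too would push
# the failure rate higher still, as the quote notes.
```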
Everybody knows how rigorous the drug development process is. Pharmaceutical companies expend colossal effort optimizing drug dose and trial inclusion criteria in order to tease out the intrinsic efficacy of a new molecule, if it’s present. Because the financial stakes are so high, every effort is made to minimize “noise” in trial results that could obscure an efficacy signal. And yet even these maximally financially incentivized stakeholders have abysmal success rates for bringing new drugs to market. Viewed in this light, maybe it’s not so surprising that researchers unaffiliated with pharmaceutical companies rarely meet with success: they have fewer resources at their disposal, and they are often testing complex, nonspecific interventions (e.g., “sepsis bundles”, perioperative pulse oximetry) for heterogeneous, poorly defined conditions, rather than intensively targeted new molecules directed at highly specific biologic pathways in homogeneous, well-defined conditions…
Discovering efficacious new treatments is very hard in medicine, across the board, even under “optimal” testing conditions. Since success is infrequent even in the noise-minimizing conditions created by pharmaceutical companies testing new molecules, should we really be surprised that success rates are near zero in fields where noise is rampant? “Insanity is doing the same thing over and over” and all that…