Is it sensible to consider meta-analysis the strongest level of evidence for or against an intervention?

I’ve come across the Ioannidis paper (which I generally agree with), as well as the technique of Trial Sequential Analysis.

This method is not without its critics.

Should Cochrane apply error-adjustment methods when conducting repeated meta-analyses?

I think the arguments in the Cochrane paper have substantial merit. Trial Sequential Analysis adapts frequentist techniques from group-sequential clinical trials, where accumulating data are analyzed at planned interim looks (so past data affect whether collection continues, and the Type I error must be "spent" across those looks), and tries to apply them to meta-analysis.
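To make the borrowed machinery concrete, here is a minimal sketch of the O'Brien-Fleming-type alpha-spending calculation that group-sequential methods (and, by extension, TSA) rely on. The information fractions are invented for illustration, and the boundary is computed from cumulative spending as a simple approximation, not the exact joint-distribution computation a real monitoring plan would use:

```python
# Minimal sketch of O'Brien-Fleming-type alpha-spending across repeated
# looks, the group-sequential machinery that TSA borrows.
# The information fractions below are invented for illustration.
from scipy.stats import norm

ALPHA = 0.05  # two-sided Type I error to be "spent" across looks

def of_spending(t: float, alpha: float = ALPHA) -> float:
    """Lan-DeMets O'Brien-Fleming-type spending function:
    cumulative alpha spent at information fraction t (0 < t <= 1)."""
    return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / t**0.5))

# Hypothetical cumulative information fractions after each new trial
# enters the meta-analysis (fraction of the required information size).
fractions = [0.2, 0.45, 0.7, 1.0]

for t in fractions:
    spent = of_spending(t)
    # Approximate monitoring boundary: the z-value whose two-sided tail
    # area equals the cumulative alpha spent so far. (Exact boundaries
    # require the joint distribution across looks; this is a sketch.)
    z_bound = norm.ppf(1.0 - spent / 2.0)
    print(f"info={t:.2f}  cumulative alpha={spent:.6f}  |z| boundary~{z_bound:.2f}")
```

Early looks get extreme boundaries (|z| above 4), which is exactly the conservatism TSA imports into cumulative meta-analysis.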

At least for a retrospective meta-analysis, this seems beside the point, if not entirely wrong: there is no prospective stopping rule in play, so the sequential error-spending machinery has nothing to protect. It also drags us back to hypothesis testing, which many methodologists are actively trying to move away from.

Here are two papers from a member of the Cochrane committee on some of the issues with it.

Sadly behind a paywall:

The more I read about trial sequential analysis, the more I had to agree with Dr. Harrell’s post on why he became a Bayesian.

Instead of trial sequential analysis, I'd much rather approach this from a Bayesian decision-theoretic point of view: synthesize whatever evidence is available, generate a range of plausible prior distributions (from skeptical to optimistic) that are then constrained by the data, and decide whether a new experiment is necessary or the evidence can be accepted as it stands.
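As a toy illustration of what I mean, here is a minimal sketch of synthesizing a pooled estimate under a range of priors using a conjugate normal-normal model. All of the numbers (the pooled log odds ratio, its standard error, and the prior settings) are hypothetical; a real analysis would fit a full hierarchical model rather than this shortcut:

```python
# Minimal sketch of evidence synthesis under a range of priors, using a
# conjugate normal-normal model for a pooled log odds ratio.
# All numbers are hypothetical; a real analysis would use a full
# hierarchical (random-effects) model, e.g. via MCMC.
import math
from scipy.stats import norm

# Hypothetical pooled estimate from the available trials:
# log odds ratio and its standard error.
y_hat, se = -0.25, 0.12

priors = {
    "skeptical":  (0.0, 0.10),   # centered at no effect, tight
    "neutral":    (0.0, 0.50),   # centered at no effect, diffuse
    "optimistic": (-0.25, 0.20), # centered on a worthwhile benefit
}

for name, (mu0, sd0) in priors.items():
    # Conjugate normal-normal posterior for the pooled effect.
    w = 1.0 / sd0**2 + 1.0 / se**2                    # total precision
    post_mean = (mu0 / sd0**2 + y_hat / se**2) / w
    post_sd = math.sqrt(1.0 / w)
    # Decision-relevant quantity: probability the effect is beneficial
    # (log OR < 0). If this is high even under the skeptical prior, a
    # new trial may add little; if the priors disagree sharply, that is
    # an argument for collecting more data.
    p_benefit = norm.cdf(0.0, loc=post_mean, scale=post_sd)
    print(f"{name:>10}: posterior mean={post_mean:+.3f}, "
          f"P(benefit)={p_benefit:.3f}")
```

The point of the exercise is the last column: the decision to run a new trial hinges on whether the skeptical and optimistic analyses still disagree after seeing the data.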

There is more than enough room for both frequentist and Bayesian philosophies in medical science. But I think the Bayesian decision-theoretic point of view needs to be explained much more widely than it has been. There would be no dispute about "evidence hierarchies" if the mathematical results from decision theory were better known.


I really liked that BMJ Perspective. Thank you!


Nicely articulated points on the issues with any single trial. Do you agree that a single study run in multiple centers mitigates these concerns, and if so, by how much: a little, somewhat, or substantially?

Would you be able to point to some papers with these results?

There are some good links in this discussion from a philosophical point of view:

I collected some references here:

The paper on Theory of Experimenters is likely your best starting point.

For a more pragmatic discussion, the following dissertation is worth close study if your field involves small-sample research. With small samples, we need to weigh the efficiency that algorithmic covariate balancing provides against the robustness that randomization provides (a toy sketch of the underlying matched-randomization idea follows the citation below). The author was advised by Dr. Harrell and Dr. Blume.

Chipman, Jonathan Joseph (2019). Sequential Rematched Randomization and Adaptive Monitoring with the Second-Generation p-Value to Increase the Efficiency and Efficacy of Randomized Clinical Trials. (link)
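For flavor, here is a toy sketch of the matched-pair randomization idea that sequentially rematched designs build on. This is not Chipman's actual algorithm, and the subjects and covariate values are made up:

```python
# Toy illustration of matched-pair randomization (the idea underlying
# sequentially rematched designs; not Chipman's actual algorithm):
# pair subjects by covariate distance, then randomize within each pair,
# trading some allocation randomness for covariate balance.
import random

random.seed(42)

# Hypothetical enrolled subjects: (id, baseline covariate value).
subjects = [("s1", 54.0), ("s2", 61.0), ("s3", 47.0),
            ("s4", 59.0), ("s5", 52.0), ("s6", 66.0)]

# Greedy pairing by covariate proximity. A real sequential design would
# rematch as subjects accrue and use multivariate distances.
pool = sorted(subjects, key=lambda s: s[1])
pairs = [(pool[i], pool[i + 1]) for i in range(0, len(pool) - 1, 2)]

assignments = {}
for a, b in pairs:
    # Within each matched pair, a fair coin decides who is treated,
    # preserving randomization while guaranteeing balance on the covariate.
    treated, control = (a, b) if random.random() < 0.5 else (b, a)
    assignments[treated[0]] = "treatment"
    assignments[control[0]] = "control"

for sid, arm in sorted(assignments.items()):
    print(sid, arm)
```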
