I’ve come across the Ioannidis paper (which I generally agree with), as well as the technique of Trial Sequential Analysis.

This method is not without its critics.

Should Cochrane apply error-adjustment methods when conducting repeated meta-analyses?

I think the arguments in the Cochrane paper have substantial merit. Trial Sequential Analysis adapts Frequentist techniques from sequential clinical trials (where past data affects future data collection) and tries to apply them to meta-analysis.

At least for a retrospective meta-analysis, this seems beside the point, if not entirely wrong. It also drags us back to the idea of hypothesis testing, which many researchers are trying to move away from.

Here are two papers from one of the members of the Cochrane committee on some of the issues with it. Sadly, both are behind a paywall:

The more I read about trial sequential analysis, the more I had to agree with Dr. Harrell’s post on why he became a Bayesian.

Instead of trial sequential analysis, I’d much rather approach this from a Bayesian decision-theoretic point of view: synthesize whatever evidence is available, generate a range of plausible distributions (from skeptical to optimistic) that are constrained by the data, then decide whether a new experiment is needed or whether the evidence can be accepted as it stands.
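As a minimal sketch of what I mean, here is a normal-normal conjugate update of a pooled effect estimate under a skeptical and an optimistic prior. All of the numbers (a hypothetical pooled log hazard ratio of -0.20 with standard error 0.10, and the two prior choices) are made up purely for illustration; the point is only the workflow of contrasting posteriors before deciding whether more data are needed:

```python
import math

def posterior(prior_mean, prior_sd, data_mean, data_se):
    """Conjugate normal-normal posterior for a single pooled estimate."""
    w_prior = 1.0 / prior_sd**2   # prior precision
    w_data = 1.0 / data_se**2     # precision of the pooled estimate
    post_var = 1.0 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * data_mean)
    return post_mean, math.sqrt(post_var)

def prob_benefit(mean, sd):
    """P(effect < 0): probability the log hazard ratio favors treatment."""
    z = (0.0 - mean) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical meta-analytic summary: log HR = -0.20, SE = 0.10
data_mean, data_se = -0.20, 0.10

# Skeptical prior centered at no effect; optimistic prior centered on benefit
for label, pm, psd in [("skeptical", 0.0, 0.10), ("optimistic", -0.20, 0.20)]:
    m, s = posterior(pm, psd, data_mean, data_se)
    print(f"{label}: posterior mean {m:.3f}, sd {s:.3f}, "
          f"P(benefit) = {prob_benefit(m, s):.3f}")
```

If even the skeptical prior yields a high posterior probability of benefit, a new trial may add little; if the priors disagree sharply, that disagreement itself is an argument for more data.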

There is more than enough room for both Frequentist and Bayesian philosophies in medical science. But I think the Bayesian decision-theoretic point of view needs to be explained much more than it has been. There would be no dispute about “evidence hierarchies” if the mathematical results from decision theory were better known.