Is it sensible to consider meta-analysis the strongest level of evidence for or against an intervention?


#1

Some experts place meta-analysis of RCTs atop a theoretical ‘hierarchy of evidence,’ while others, I believe, instead give the top position to the best-designed RCT.

When might each of these ideas be reasonable, if this is indeed a meaningful way to think about things at all?


#2

I don’t think so. A meta-analysis is built on smaller (possibly poorer-quality), older (possibly no longer entirely relevant) trials. Additional biases can also creep in (such as selection bias in which trials get included). Combining the trials won’t eliminate bias; if the same systematic bias exists in each trial, combining them will only reinforce it (see the sketch below). Shapiro said: “the quality of the information yielded by [meta-analysis] cannot transcend the quality of the individual studies.” [ref] But there is some bad logic out there, e.g., some have claimed that meta-analysis can be used to indicate whether further trials are warranted, and some have argued for running larger trials in order to improve future meta-analyses! I don’t know whether these opinions still circulate, though.
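A minimal sketch of that point (the true effect, the size of the bias, and the trial sizes are all invented for illustration): pooling ten trials that share the same systematic bias produces a tight confidence interval around the biased value, not around the truth.

```python
import numpy as np

rng = np.random.default_rng(42)

true_effect = 0.0   # assumed: the intervention truly does nothing
shared_bias = 0.2   # assumed: e.g., inadequate blinding inflates every trial
n_trials, n_per_arm = 10, 50

# Each trial reports the biased effect plus sampling noise.
se = np.sqrt(2.0 / n_per_arm)  # SE of a difference in means, unit variance
estimates = true_effect + shared_bias + rng.normal(0.0, se, n_trials)

# Standard fixed-effect (inverse-variance) pooling.
weights = np.full(n_trials, 1.0 / se**2)
pooled = np.sum(weights * estimates) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled estimate: {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
# The interval is narrow and sits near the bias (0.2), not the truth (0):
# pooling shrank the noise but left the shared bias fully intact.
```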

edit: I wonder whether anyone has written about “when to perform a meta-analysis,” because meta-analyses of as few as two small studies are appearing (perhaps to generate publications), and statisticians are developing methodology for this.


#3

Thank you for this thoughtful reply. I have also seen many meta-analyses lately, and your point about defining when they are useful is an excellent one. I’d love to see that explored; it really underlies my question above. Thanks!


#4

Remember, meta-analysis, when done correctly, is just a synthesis of the current evidence, as PaulBrown describes. It should lay out the current state of the issue or intervention, changes in the field, and gaps in knowledge. The Campbell and Cochrane collaborations have “ongoing” meta-analyses where, for important evidence that clinical societies regularly refer to, they essentially recreate the initial meta-analysis, state any changes in the literature, and report whether gaps have been addressed and how well.

To get at PaulBrown’s question: if the topic is important, about every 3-5 years, even for well-established evidence (aspirin and cardiac risk, for example).

Also, if the meta-analysis and review don’t follow PRISMA guidelines, don’t waste your time. Meta-analyses have become something of a cottage industry of late, and fodder for publication bloat.


#5

I would argue against the concept of a hierarchy of evidence. All types of evidence exist on a continuum with notable overlap between different study types. For instance, a meta-analysis of 3 nearly identical studies is likely more reliable than any one of those studies. However, I would have more faith in recommending a treatment based on one large, well-conducted RCT than on 5 small studies. Perhaps the concept of a set hierarchy is actually leading us astray?


#6

I think the “hierarchy of evidence” is essential for pushing back against this new cynicism regarding RCTs: https://www.acsh.org/news/2018/05/21/are-most-clinical-trials-unethical-12987

Otherwise we have people perversely demanding lower-quality evidence for the sake of “ethics.”

But I completely agree about the value of prospective meta-analysis, where studies are designed with the intention of later combining them.


#7

As alluded to above, this has become a bit of a cottage industry for people looking to score publications.

Some journals also like to publish meta-analyses because they know they’ll be cited, goosing their impact factor.

I’m not aware of any publications on “when to do a meta-analysis,” but I have seen some published meta-analyses that were clearly ridiculous. It’s not just about the number of trials, IMO. Once I saw a meta-analysis of clinical outcomes in Phase 2 clinical trials of PCSK9 inhibitors that included, I think, 24 trials. Something like 19 of the 24 trials had zero events. Of course they did: the trials were all short-term dose-exploration studies, mostly following patients for 6-12 weeks! None of them were designed to follow patients long enough to accumulate clinical events! (A rough calculation below shows why zero events is exactly what you’d expect.)
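Here’s that back-of-the-envelope calculation; the event rate and trial size are illustrative assumptions, not the actual figures from those trials.

```python
import math

# Assumed, illustrative numbers -- not from the actual PCSK9 program.
n_patients = 100          # enrolled per trial
annual_event_rate = 0.02  # ~2%/year for major cardiovascular events
follow_up_weeks = 8

expected_events = n_patients * annual_event_rate * follow_up_weeks / 52
# Probability a trial sees zero events, treating the count as Poisson.
p_zero = math.exp(-expected_events)

print(f"expected events per trial: {expected_events:.2f}")  # ~0.31
print(f"P(zero events in a trial): {p_zero:.2f}")           # ~0.74
# If ~3 of every 4 such trials are expected to see no events at all,
# pooling them can't tell you anything about clinical outcomes.
```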


#8

Great points, Andrew!


#9

Although it is often labelled as the best form of evidence, and in principle it should be, I have come to doubt it. There are just too many ways to select what goes into the meta-analysis and what’s excluded. And in many fields there are a lot of small trials that show marginal improvements for some intervention: after meta-analysis the result can look quite certain, when all we are looking at is false positives and publication bias (the toy simulation below shows how that happens).
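That mechanism is easy to demonstrate; every number here is invented for illustration. Simulate many small trials of a truly null intervention, “publish” only the nominally significant positive results, and pool what got published.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_per_arm = 200, 30
true_effect = 0.0              # assumed: the intervention does nothing
se = np.sqrt(2.0 / n_per_arm)  # SE of a difference in means, unit variance

estimates = rng.normal(true_effect, se, n_trials)
z = estimates / se

# Publication filter: only positive results with p < 0.05 get written up.
published = estimates[z > 1.96]

# Fixed-effect pooling of the published trials (equal SEs here).
pooled = published.mean()
pooled_se = se / np.sqrt(len(published))

print(f"{len(published)} of {n_trials} trials 'published'")
print(f"pooled effect: {pooled:.2f} +/- {1.96 * pooled_se:.2f}")
# The pooled effect comes out decisively positive with a tight interval,
# even though every single trial was drawn from a null distribution.
```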


#10

The evolution of my thinking has been similar. Thank you for these comments.


#11

FWIW, I teach meta-analysis to our interns, so you may see my influence on the wards. To me, meta-analysis is a type of review – it summarizes what evidence exists. How many studies? How large? How good are the methods? But I definitely disagree with the standard hierarchy here. Poor trials carry bias, and averaging biased with unbiased studies puts tight confidence intervals around partially biased results (see the sketch below). It has been shown repeatedly that simple, common flaws like unclear randomization procedures and uncertain blinding quality are associated with exaggerated effect estimates. Add in publication bias, etc., and I find the summary measure pretty iffy. If what you care about is the absolute result, I like what the IPDMA Collaboratives do: just summarize the bigger, better trials. Ioannidis did some work on this a long time ago, but I believe the data-based verification was uncertain. https://www.ncbi.nlm.nih.gov/pubmed/8861993
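A minimal illustration of the biased-plus-unbiased point, with invented trial results: suppose three trials with methodological flaws overestimate the effect, two cleaner trials get it right, and inverse-variance pooling lands confidently in between.

```python
import numpy as np

def fixed_effect_pool(estimates, ses):
    """Standard fixed-effect (inverse-variance) pooled estimate and SE."""
    estimates, ses = np.asarray(estimates), np.asarray(ses)
    w = 1.0 / ses**2
    pooled = np.sum(w * estimates) / np.sum(w)
    return pooled, np.sqrt(1.0 / np.sum(w))

# Invented results; suppose the true effect is 0.10.
# Three trials with unclear randomization/blinding report ~0.40;
# two well-conducted trials report ~0.10. All have SE 0.10.
estimates = [0.42, 0.38, 0.40, 0.11, 0.09]
ses = [0.10, 0.10, 0.10, 0.10, 0.10]

pooled, pooled_se = fixed_effect_pool(estimates, ses)
print(f"pooled: {pooled:.2f} +/- {1.96 * pooled_se:.2f}")
# -> about 0.28 +/- 0.09: a tight interval around a number that is
# neither the truth (0.10) nor the biased value (0.40), and nothing
# in the summary statistic warns you that anything is off.
```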