On page 3 of your manuscript, you wrote the following:
No one can deny the fact that meta-analyses are the highest level of evidence in evidence-based medicine…
This very point has been disputed in a number of threads. The philosophical literature is just too large to list.
The primary fallacy is the notion that a single meta-analysis is definitive. For any set of n reports there are 2^n possible meta-analyses, if each report is given a weight of 1 (include) or 0 (ignore). Any approach that allows continuous weights in [0,1] enlarges the set of possible analyses to [0,1]^n, whose cardinality is 2^{\aleph_0}, i.e. the cardinality of the continuum, which is uncountable. This exaggerates the range of justifiable perspectives, but it illustrates that very special circumstances are required to make any synthesis definitive.
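As a toy illustration of the counting argument, here is a minimal sketch in Python; the effect estimates and variances are invented for the example, and the pooling is a plain fixed-effect inverse-variance average rather than any particular published method:

```python
from itertools import combinations

# Hypothetical effect estimates and within-study variances for n = 5 reports
# (numbers invented purely to illustrate the 2^n counting argument).
effects   = [0.10, 0.25, -0.05, 0.40, 0.15]
variances = [0.04, 0.09,  0.02, 0.16, 0.05]

def pooled(idx):
    """Fixed-effect (inverse-variance) pooled estimate for a subset of studies."""
    w = [1.0 / variances[i] for i in idx]
    return sum(wi * effects[i] for wi, i in zip(w, idx)) / sum(w)

# Every non-empty subset of studies is a distinct, formally valid meta-analysis.
subsets = [s for r in range(1, len(effects) + 1)
             for s in combinations(range(len(effects)), r)]
estimates = [pooled(s) for s in subsets]

print(f"{len(subsets)} possible subset analyses (2^5 - 1 non-empty)")
print(f"pooled estimates range from {min(estimates):.3f} to {max(estimates):.3f}")
```

Even with five reports there are 31 distinct non-empty subset analyses; once continuous weights are admitted, the possibilities are uncountable.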
This has led some to apply the jackknife resampling method to study the variability of a meta-analysis (a minimal sketch of the idea follows the reference below).
Gee, T. (2005). Capturing study influence: the concept of ‘gravity’ in meta-analysis. Aust Couns Res J, 1, 52–75.
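A minimal sketch of that idea, not Gee's implementation, using the same hypothetical studies and fixed-effect pooling as above: a leave-one-out jackknife shows how much the pooled estimate moves when any single study is dropped.

```python
# Leave-one-out jackknife over studies: how much does the pooled estimate
# move when each single study is dropped? (Hypothetical numbers; the generic
# resampling idea only, not Gee's specific 'gravity' measure.)
effects   = [0.10, 0.25, -0.05, 0.40, 0.15]
variances = [0.04, 0.09,  0.02, 0.16, 0.05]

def pooled(idx):
    """Fixed-effect (inverse-variance) pooled estimate for a subset of studies."""
    w = [1.0 / variances[i] for i in idx]
    return sum(wi * effects[i] for wi, i in zip(w, idx)) / sum(w)

full = pooled(range(len(effects)))
for drop in range(len(effects)):
    loo = pooled([i for i in range(len(effects)) if i != drop])
    print(f"drop study {drop}: pooled = {loo:+.3f}  (shift {loo - full:+.3f})")
```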
Gene Glass (whom you mention) later, in 2000, had this to say about meta-analysis:
Meta-analysis needs to be replaced by archives of raw data that permit the construction of complex data landscapes that depict the relationships among independent, dependent and mediating variables. We wish to be able to answer the question, “What is the response of males ages 5-8 to ritalin at these dosage levels on attention, acting out and academic achievement after one, three, six and twelve months of treatment?” … We can move toward this vision of useful synthesized archives of research now if we simply re-orient our ideas about what we are doing when we do research. We are not testing grand theories, rather we are charting dosage-response curves for technological interventions under a variety of circumstances. We are not informing colleagues that our straw-person null hypothesis has been rejected at the .01 level, rather we are sharing data collected and reported according to some commonly accepted protocols. We aren’t publishing “studies,” rather we are contributing to data archives.
Nelder made a brief comment on ‘meta-analysis’ in 1986 that is worth mentioning (p. 113):
Recently the term ‘meta-analysis’ has been introduced (Glass et al 1981, Hedges and Olkin 1985) to describe the combination of information from many studies. The use of this … rather pretentious term for a basic activity of science is a clear indication of how far some statisticians’ views of statistics have diverged from the basic procedures of science.
Nelder, J. A. (1986). Statistics, Science and Technology. Journal of the Royal Statistical Society. Series A (General), 149(2), 109–121. https://doi.org/10.2307/2981525
I have always been annoyed when the term “evidence” is dogmatically thrown around by professionals with no particular expertise in statistics, mathematics, or logic, while actual experts are much more nuanced.
Michael Evans wrote the following as the first sentence of the preface to his 2015 book Measuring Statistical Evidence Using Relative Belief:
The concept of statistical evidence is somewhat elusive.
Richard Royall expressed similar opinions in his 1997 text Statistical Evidence: A Likelihood Paradigm:
…Standard statistical methods regularly lead to the misinterpretation of scientific studies. The errors are usually quantitative, when the evidence is judged to be stronger (or weaker) than it really is. But sometimes they are qualitative – sometimes one hypothesis is judged to be supported over another when the opposite is true. These misinterpretations are not a consequence of scientists misusing statistics. They reflect instead a critical defect in current theories of statistics.
Research waste is merely a consequence of ignoring decision theory as fundamental to scientific inference.