The big advantage I have yet to see explicitly stated is that reporting variables as inputs to regression models makes it possible to meaningfully aggregate research studies across reports, even when it is just one team or lab synthesizing its own work. This is more consistent with a mathematically rigorous, information-theoretic view of experimental design.
If we had tables (similar to those in Frank’s post) that reported the information added by each variable in each study, it would be possible to see which variables are relatively consistent across studies, and which might be worth following up on when reports seem to disagree.
It would also help a research community come to a consensus over time on the most informative study design protocol.
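As a rough sketch of what such a per-study table could look like, here is a Python snippet where the "information added" by each predictor is measured as its likelihood-ratio chi-square, comparing the full model against the model with that predictor dropped. All variable names and data below are invented for illustration:

```python
# Sketch: a per-variable "information added" table for one (simulated) study.
# Assumes the quantity of interest is each predictor's likelihood-ratio
# chi-square; the predictors (age, dose, male) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "dose": rng.uniform(0, 100, n),
    "male": rng.integers(0, 2, n),
})
logit = -3 + 0.03 * df["age"] + 0.02 * df["dose"] + 0.3 * df["male"]
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def lr_table(data, outcome, predictors):
    """LR chi-square added by each predictor: full model vs.
    the model with that single predictor dropped."""
    X_full = sm.add_constant(data[predictors])
    full = sm.Logit(data[outcome], X_full).fit(disp=0)
    rows = []
    for p in predictors:
        reduced = [q for q in predictors if q != p]
        X_red = sm.add_constant(data[reduced])
        fit_red = sm.Logit(data[outcome], X_red).fit(disp=0)
        rows.append({"variable": p,
                     "LR chi-square": 2 * (full.llf - fit_red.llf)})
    return pd.DataFrame(rows)

print(lr_table(df, "y", ["age", "dose", "male"]))
```

If every study published a table like this under a shared protocol, the cross-study comparison described above would be straightforward.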
Gene Glass, a pioneer in the application of meta-analysis in psychology (which later spread to medicine like a virus), had the following to say about its fruitful application in a more modern context. Keep in mind this was written back in 2000, in *Meta-Analysis at 25*:
> Meta-analysis needs to be replaced by archives of raw data that permit the construction of complex data landscapes that depict the relationships among independent, dependent and mediating variables. We wish to be able to answer the question, “What is the response of males ages 5-8 to ritalin at these dosage levels on attention, acting out and academic achievement after one, three, six and twelve months of treatment?” … We can move toward this vision of useful synthesized archives of research now if we simply re-orient our ideas about what we are doing when we do research. We are not testing grand theories, rather we are charting dosage-response curves for technological interventions under a variety of circumstances. We are not informing colleagues that our straw-person null hypothesis has been rejected at the .01 level, rather we are sharing data collected and reported according to some commonly accepted protocols. We aren’t publishing “studies,” rather we are contributing to data archives.
Donald Rubin wrote the following on meta-analysis in 1992(!):
Rubin, D. B. (1992). Meta-Analysis: Literature Synthesis or Effect-Size Surface Estimation? Journal of Educational Statistics, 17(4), 363–374. https://doi.org/10.3102/10769986017004363
> I am less happy, however, with more esoteric statistical techniques and their implied objects of estimation (i.e., their estimands) which are tied to the conceptualization of average effect sizes, weighted or otherwise, in a population of studies. In contrast to these average effect sizes of literature synthesis, I believe that the proper estimand is an effect-size surface, which is a function only of scientifically relevant factors, and which can only be estimated by extrapolating a response surface of observed effect sizes to a region of ideal studies.
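To make Rubin’s estimand concrete, here is a minimal sketch, assuming the “effect-size surface” is approximated by an inverse-variance weighted regression of observed per-study effect sizes on scientifically relevant study factors, then extrapolated to an “ideal study” design point. All effect sizes, variances, and factor names here are hypothetical:

```python
# Minimal sketch of an effect-size surface: regress per-study effect sizes
# on study-level factors, then predict at an "ideal study" design point.
# All numbers below are made up for illustration.
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study summaries: effect size, its variance,
# and two study-level factors (e.g., dose and mean participant age).
effect = np.array([0.20, 0.35, 0.15, 0.50, 0.40, 0.25])
var = np.array([0.010, 0.020, 0.015, 0.030, 0.012, 0.018])
dose = np.array([10.0, 20.0, 10.0, 40.0, 30.0, 20.0])
age = np.array([6.0, 7.0, 5.0, 8.0, 7.0, 6.0])

X = sm.add_constant(np.column_stack([dose, age]))
fit = sm.WLS(effect, X, weights=1 / var).fit()  # inverse-variance weights

# Extrapolate the fitted surface to a design point no single study ran.
ideal = np.array([[1.0, 25.0, 6.5]])  # intercept, dose, age
print("predicted effect size at ideal design point:", fit.predict(ideal))
```

The point is that the regression surface, not any weighted average of the observed effect sizes, is the object of scientific interest.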
Information fusion is not a specialized research technique, but a critical tool for every working scientist.
The inconsistent hodge-podge of Neyman-Pearson decision theory with Fisher’s information synthesis has led to a scientific literature with more noise than signal on most topics, due to the way results are analyzed and reported.
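For one concrete reading of “Fisher’s information synthesis,” here is his classic method for fusing evidence across independent studies: combine p-values via -2 * sum(log p_i), which follows a chi-square distribution with 2k degrees of freedom under the joint null. The p-values below are made up:

```python
# Fisher's method for combining p-values from k independent studies
# (one simple form of information fusion). The p-values are hypothetical.
from scipy import stats

pvals = [0.08, 0.12, 0.06, 0.20]  # per-study p-values, none "significant" alone
stat, combined_p = stats.combine_pvalues(pvals, method="fisher")
print(f"chi-square = {stat:.2f}, combined p = {combined_p:.4f}")
```

Individually inconclusive studies can fuse into strong joint evidence, which is exactly the information that one-study-at-a-time significance reporting throws away.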