Given this example: I’m conducting a meta-analysis on the association between BMI and diabetes. The studies I have gathered fall roughly into two groups:
Group A: studies where patients with and without diabetes are compared by means of BMI
Group B: studies where patients are split into high vs. low BMI groups and the odds ratio of diabetes is presented as the summary result
I imagine the best approach would be to run two separate meta-analyses in the same work. I’m not sure whether converting between effect sizes would be formally correct, for example converting odds ratios into Cohen’s d and pooling all the studies together, because the two groups address the same association but from opposite viewpoints.
I wonder if there would be an elegant and correct way to pool all the studies together, what do you think?
You describe a mix of case-control (Group A) and cohort (and cross-sectional) studies (Group B). These answer different questions, and I would support your suggestion of keeping them separate in your analyses.
Can you clarify how the data is collected in the Group B studies? Your brief description suggests to me (perhaps incorrectly) that there is some sort of dichotomization of the BMI, which is problematic for a number of reasons.
From another thread:
“Optimal” cutpoints do not replicate over studies. Hollander, Sauerbrei, and Schumacher state that “… the optimal cutpoint approach has disadvantages. One of these is that in almost every study where this method is applied, another cutpoint will emerge. This makes comparisons across studies extremely difficult or even impossible.”
Doing a quick search for “BMI + cut point” yielded:
Incorporating a cut point as is done in this paper would now lead to more research on the need to study an interaction between ethnicity and BMI, possibly with the literature suggesting different ethnic groups require different cut points.
This “finding” is most likely an artifact of using a cut point in the first place. All of this makes me less than confident that the studies in group B can be used for anything.
Having said that, if you are willing to trust the reported effects, or to apply some sort of correction for the dichotomization, then studies that use different effect sizes can still be aggregated by converting among them. See this as well as:
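To make that concrete, one common conversion is the logit method (Chinn, 2000; Hasselblad &amp; Hedges, 1995), which re-expresses a log odds ratio as a standardized mean difference under the assumption that the latent outcome follows a logistic distribution. A minimal sketch in Python; the odds ratio and confidence interval below are made up for illustration:

```python
import math

def log_or_to_d(log_or, var_log_or):
    """Convert a log odds ratio to Cohen's d via the logit method.

    Assumes the underlying continuous outcome is logistic-distributed,
    so d = ln(OR) * sqrt(3) / pi, and the variance scales by 3 / pi^2.
    """
    factor = math.sqrt(3) / math.pi
    d = log_or * factor
    var_d = var_log_or * factor ** 2
    return d, var_d

# Hypothetical study: OR = 2.0 with a 95% CI of (1.2, 3.3).
log_or = math.log(2.0)
# Recover the standard error from the CI width on the log scale.
se = (math.log(3.3) - math.log(1.2)) / (2 * 1.96)
d, var_d = log_or_to_d(log_or, se ** 2)
```

Whether this conversion is defensible here is exactly the question: it papers over the information lost by dichotomizing BMI in the first place, so it should be treated as an approximation, not a fix.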
To correct for dichotomization, see:
Much of the easily accessible work on how to do a meta-analysis will suggest some sort of standardized mean difference. That is problematic for various reasons.
In your case, it sounds like a correlation as an effect size might be useful, but I think I’d still prefer an odds ratio (if used with caution).
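If a correlation scale is preferred, a standardized mean difference can in turn be re-expressed as a point-biserial correlation. A rough sketch, using the standard correction factor for group sizes (the group sizes below are hypothetical; the factor reduces to 4 when the groups are equal):

```python
import math

def d_to_r(d, n1, n2):
    """Convert Cohen's d to a point-biserial correlation.

    The factor a = (n1 + n2)^2 / (n1 * n2) corrects for unequal
    group sizes; a = 4 when n1 == n2.
    """
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d ** 2 + a)

r_equal = d_to_r(0.5, 100, 100)    # a = 4
r_unequal = d_to_r(0.5, 150, 50)   # larger a shrinks r
```

Note that chaining conversions (OR to d to r) compounds the approximation error, which is another argument for staying on the odds ratio scale when possible.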
I’ve collected a number of papers on the challenges of meta-analysis in this thread. Knowing what I know now, starting with the Stephen Senn papers that discuss effect sizes and then following up with various citations by Sander Greenland would have saved me a lot of time.