Meta-regression interpretation: significant coefficient with no association

I’m conducting a meta-analysis and I’m interested in assessing the association between two covariates in a clinical scenario.

For example, I’m interested in assessing the prevalence of obesity in the general population and the association between obesity and being male.

I perform a meta-regression and see that the percentage of males included in each study is a significant covariate, with 70% of the heterogeneity explained.
When I meta-analyze the odds ratio between being male and being obese, I don’t get a statistically significant result.

Does this mean that the results of the meta-regression are spurious?
I’m uncertain how to explain this result.

Thank you!

Assuming your calculations are correct and you have a reasonable number of studies (i.e., at least 10 for a regression analysis), I don’t see why the two procedures should be considered in disagreement. Two possibilities come to mind:

  1. The studies in your sample were heterogeneous with respect to biological sex.
  2. Some other variables (e.g., age, SES, etc.) need to be accounted for in your computation of an aggregate effect.

There is a section in BBR on how odds ratios are sensitive to covariates.
From Ch 13:

“From seeing this example one can argue that odds ratios, like hazard ratios, were never really designed to be computed on a set of subjects having heterogeneity in their expected outcomes.”

The more rigorous guidance for effect-size aggregation in meta-analysis suggests caution in computing effect-size estimates from heterogeneous data.
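The quote is about the non-collapsibility of the odds ratio, and a tiny numerical sketch may make it concrete (my own toy numbers, not taken from BBR or from any study data): the group effect is identical within each covariate stratum, yet the odds ratio computed on the pooled, heterogeneous population is attenuated toward 1.

```python
# Toy illustration of odds-ratio non-collapsibility (hypothetical numbers):
# the stratum-specific OR for "group" is the same in both covariate strata,
# but the marginal OR in the 50/50 mixed population is smaller, even though
# the covariate is perfectly balanced across groups (no confounding).
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def odds_ratio(p1, p0):
    return (p1 / (1 - p1)) / (p0 / (1 - p0))

b0, b_group, b_cov = -1.0, 1.0, 2.0   # hypothetical logistic coefficients

# Outcome probability for each (group, covariate-stratum) cell
p = {(g, x): expit(b0 + b_group * g + b_cov * x) for g in (0, 1) for x in (0, 1)}

print("OR within stratum x=0:", odds_ratio(p[1, 0], p[0, 0]))  # = exp(b_group)
print("OR within stratum x=1:", odds_ratio(p[1, 1], p[0, 1]))  # = exp(b_group)

# Marginal probabilities when the covariate is split 50/50 in both groups
p1_marg = 0.5 * (p[1, 0] + p[1, 1])
p0_marg = 0.5 * (p[0, 0] + p[0, 1])
print("Marginal OR:", odds_ratio(p1_marg, p0_marg))            # attenuated toward 1
```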

A better use of meta-regression would be to characterize what variables should go into the model for a future study, rather than computing an aggregate effect and acting as if it is “the truth.”

If I’ve made any errors in the above, I’d appreciate the corrective input.

1 Like

A meta-regression with a binary explanatory variable gives the same results as a sub-grouped meta-analysis. Thus, if you conduct a meta-analysis of proportions by gender, the pooled estimate for females should equal the intercept of your meta-regression model when a (transformed) proportion is the outcome and gender is the explanatory variable. This is probably worth checking before a meta-analysis of prevalence odds ratios is done.
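To make that equivalence concrete, here is a minimal sketch with made-up study data, assuming common-effect inverse-variance weighting (with a random-effects or IVhet weighting scheme the algebra differs, but the idea is the same): the pooled estimate from the female-only subgroup matches the intercept of a weighted regression of the (transformed) proportions on the binary sex covariate.

```python
# Minimal check (hypothetical data): subgroup pooling vs. meta-regression intercept
# under common-effect inverse-variance weighting.
import numpy as np

yi = np.array([-1.2, -0.9, -1.1, -0.7, -1.3, -0.8])  # e.g. logit-transformed prevalences
vi = np.array([0.04, 0.06, 0.05, 0.03, 0.07, 0.05])  # their sampling variances
male = np.array([0, 0, 0, 1, 1, 1])                  # 0 = female stratum, 1 = male stratum

w = 1.0 / vi                                         # inverse-variance weights

# Sub-grouped meta-analysis: pooled estimate within the female stratum
pooled_female = np.sum(w[male == 0] * yi[male == 0]) / np.sum(w[male == 0])

# Meta-regression: weighted least squares of yi on an intercept plus the binary covariate
X = np.column_stack([np.ones_like(yi), male])
beta = np.linalg.solve(X.T @ np.diag(w) @ X, X.T @ np.diag(w) @ yi)

print(pooled_female, beta[0])                        # the two numbers coincide
```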

Model assumptions also make a difference, so a random-effects meta-regression should be avoided; instead, use inverse-variance weights with robust error variances for the meta-regression. Its counterpart in a meta-analytic model is the IVhet model, available through metan in Stata.
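For readers without Stata, here is a rough Python sketch of the IVhet idea as I understand it; the published IVhet papers and the metan implementation in Stata are the authoritative reference, so treat this only as an illustration. The point estimate keeps the plain inverse-variance weights, while the variance of the pooled estimate is inflated by a DerSimonian-Laird-type tau-squared so that the confidence interval reflects the between-study heterogeneity.

```python
# Illustrative IVhet-style pooling (not the official metan implementation).
import numpy as np

def ivhet_pool(yi, vi):
    """Pooled estimate and its variance under an IVhet-style model (sketch)."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    w = 1.0 / vi
    theta = np.sum(w * yi) / np.sum(w)            # inverse-variance point estimate

    # DerSimonian-Laird estimate of the between-study variance tau^2
    k = len(yi)
    Q = np.sum(w * (yi - theta) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)

    # IVhet-style variance: keep the normalised IV weights, but inflate each
    # study's variance by tau^2
    w_norm = w / np.sum(w)
    var_theta = np.sum(w_norm ** 2 * (vi + tau2))
    return theta, var_theta

theta, var = ivhet_pool([0.2, 0.5, -0.1, 0.4], [0.04, 0.06, 0.05, 0.03])
print(theta, theta - 1.96 * var ** 0.5, theta + 1.96 * var ** 0.5)  # estimate and 95% CI
```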

2 Likes

Thank you! I found the cause of this “error”: I reviewed the included data with the person who performed the search and saw that the meta-regression used all of the included studies, while the secondary analysis used only a limited sample. This was the cause of the difference in results, but thank you for pointing out the Stata solution.

I will have to add this paper (where you are listed as a co-author) to my reference list on the methodology of meta-analysis. I’m still going through it, but it is interesting.

Al Khalaf M, Thalib L, Doi S (2011). Combining heterogenous studies using the random-effects model is a mistake and leads to inconclusive meta-analyses. Journal of Clinical Epidemiology.
(link)

1 Like

Since then we have run a simulation in Stata to demonstrate this issue.
It is here with the code, and you can run it in Stata (if you are a Stata user) to see what the issue actually is. The recommendations are a bit radical but seem the only possible conclusion based on the evidence.
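This is not the authors’ Stata code, only an illustrative Python sketch of the kind of simulation being described: simulate heterogeneous true effects with unequal study precisions and compare the 95% CI coverage of a DerSimonian-Laird random-effects estimator against an IVhet-style estimator (as sketched earlier in the thread).

```python
# Illustrative coverage comparison under heterogeneity (hypothetical settings,
# not the authors' simulation design).
import numpy as np

rng = np.random.default_rng(1)

def dl_tau2(yi, vi):
    w = 1.0 / vi
    theta = np.sum(w * yi) / np.sum(w)
    Q = np.sum(w * (yi - theta) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    return max(0.0, (Q - (len(yi) - 1)) / c)

def random_effects(yi, vi):
    w = 1.0 / (vi + dl_tau2(yi, vi))              # DL random-effects weights
    return np.sum(w * yi) / np.sum(w), 1.0 / np.sum(w)

def ivhet(yi, vi):
    tau2, w = dl_tau2(yi, vi), 1.0 / vi           # IV weights, tau^2-inflated variance
    theta = np.sum(w * yi) / np.sum(w)
    return theta, np.sum((w / np.sum(w)) ** 2 * (vi + tau2))

true_mu, tau, k, n_sim = 0.0, 0.3, 10, 2000
cover = {"RE": 0, "IVhet": 0}
for _ in range(n_sim):
    vi = rng.uniform(0.02, 0.2, k)                # unequal study precisions
    yi = rng.normal(true_mu, tau, k) + rng.normal(0.0, np.sqrt(vi))
    for name, fit in (("RE", random_effects), ("IVhet", ivhet)):
        est, var = fit(yi, vi)
        half = 1.96 * np.sqrt(var)
        cover[name] += (est - half <= true_mu <= est + half)

print({name: n / n_sim for name, n in cover.items()})  # proportion of CIs covering true_mu
```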

Of note, one of the models tested uses quality information for bias adjustment. Sander has advised against this in a famous paper, but I am not disagreeing with that paper at all. Our point is that we accept that bias cannot be quantified through quality information; what we use instead is a relative rank to adjust for it (not to quantify it), and for that purpose we do not need to know the absolute extent of bias. If there is no ranking information, the model still applies: the ranking input is identical across studies and automatically applied, and that is what we call the IVhet model. See this for a comparison of bias adjustment methods in meta-analysis.

1 Like