Hartung-Knapp method in meta-analysis

I would appreciate it if someone could advise on the meta-analysis approach below.

I’m undertaking a systematic review of the association between incident TB and COPD.

We included 3 cohort studies and, as you can see in the figure, the point estimates are heterogeneous (I-squared = 97.04%), but the direction of the estimates is consistent and all are statistically significant.

In terms of clinical characteristics, the studies are somewhat heterogeneous. One study included only hospital-discharged COPD cases and another included only patients with pre-dialysis CKD.

In the primary analysis, I applied the Hartung-Knapp method to account for uncertainty in the estimate of between-study heterogeneity. This resulted in a wide CI and a non-significant pooled result, despite the individual studies all suggesting significant results.

I think this is due to the small number of studies, which leads to wide uncertainty in the between-study heterogeneity estimate. In contrast, the analysis at the bottom, which used a conventional method, showed a significant result. I'm wondering how to deal with this situation.
I'm inclined to stick with the HK method and also to discuss the findings qualitatively.
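
For reference, here is a minimal sketch of the two analyses in R with the metafor package, using made-up log hazard ratios and standard errors in place of the actual extracted values:

```r
# Minimal sketch with metafor; yi are illustrative log(HR)s and sei their
# standard errors -- replace with the values extracted from the 3 cohorts.
library(metafor)

yi  <- c(0.9, 1.6, 0.4)     # hypothetical log(HR) for the TB -> COPD association
sei <- c(0.15, 0.20, 0.10)  # hypothetical standard errors

# Conventional random-effects model (DerSimonian-Laird, Wald-type z interval)
res_dl <- rma(yi = yi, sei = sei, method = "DL")

# Same model with the Hartung-Knapp (Knapp-Hartung) adjustment:
# a t-distribution with k - 1 df and a modified variance estimator,
# which typically widens the CI substantially when k = 3
res_hk <- rma(yi = yi, sei = sei, method = "DL", test = "knha")

summary(res_dl)
summary(res_hk)
```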

Thank you in advance for your advice.

Some quick thoughts:

  1. If you are going to use a frequentist analysis for this, the Hartung-Knapp method has been reported to have better coverage with small numbers of studies than the DerSimonian-Laird approach. But you might want to read this:
  2. The problem: you only have 3 studies. So of course the CI is going to be wide.

  3. You will need to find more studies, but I think a meta-regression approach would be more useful to explore the heterogeneity (see the sketch after this list).
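
On point 3, here is a sketch of what a meta-regression could look like in metafor, assuming a hypothetical study-level covariate (e.g. an indicator for the hospital-discharge population) once more studies are available; all values are illustrative:

```r
# Meta-regression sketch with metafor, assuming more studies have been found.
# 'hospital' is a hypothetical study-level moderator (1 = hospital-discharged
# COPD cohort, 0 = other); replace with whatever characteristic you want to explore.
library(metafor)

dat <- data.frame(
  yi       = c(0.9, 1.6, 0.4, 1.1, 0.7),     # illustrative log(HR) values
  sei      = c(0.15, 0.20, 0.10, 0.25, 0.18),
  hospital = c(1, 0, 0, 1, 0)
)

res_mr <- rma(yi = yi, sei = sei, mods = ~ hospital, data = dat,
              method = "REML", test = "knha")
summary(res_mr)  # the 'hospital' coefficient estimates how much the log(HR) shifts
```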

Addendum: This will be a great start to doing a meta-analysis in a rigorous way, in that it doesn't naively substitute estimates of variance components and then treat them as known population parameters.

  1. Think carefully about the effect size measure. Maybe using the log odds instead of the hazard ratio will reduce heterogeneity?
  2. Ignore the “significance” of the aggregate CI. If you extracted the one-tailed p-values from each of the CIs and used a combination procedure, I'm sure it would indicate a positive association long before the aggregate CI would exclude the null (see the sketch after this list).
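
On point 2, here is a sketch of one such combination procedure: Fisher's method applied to one-sided p-values reconstructed from the reported CIs. The HRs and CIs below are illustrative placeholders, and Fisher's method is just one of several combination rules one could use:

```r
# Reconstruct one-sided p-values from reported HRs and 95% CIs, then combine
# them with Fisher's method. HRs/CIs below are illustrative placeholders.
hr    <- c(2.5, 4.8, 1.5)
lower <- c(2.0, 3.9, 1.3)
upper <- c(3.1, 5.9, 1.7)

se_log_hr <- (log(upper) - log(lower)) / (2 * qnorm(0.975))
z         <- log(hr) / se_log_hr
p_one     <- pnorm(z, lower.tail = FALSE)  # one-sided p for a positive association

# Fisher's combination: -2 * sum(log p) ~ chi-squared with 2k df under the null
x2 <- -2 * sum(log(p_one))
p_combined <- pchisq(x2, df = 2 * length(p_one), lower.tail = FALSE)
p_combined
```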

I put together a thread with a lot of references on statistical issues related to meta-analysis. Suffice it to say – the common textbook procedures will lead you astray if you are not careful.

That would be very nice. I suggest making an online Zotero bibliography. Zotero makes it easy to add citations using a web browser plugin, and easy to export to BibTeX and other formats. I manage my main bibliography as a Zotero group so I can share it: https://www.zotero.org/groups/feh/library .

I suggest concentrating on Bayesian hierarchical models, because they provide exact inference in this setting and are more flexible.
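
For a sense of what that might look like, here is a sketch using brms (one reasonable interface among several); the data and priors are illustrative only and would need to be justified for the actual application:

```r
# Bayesian hierarchical (random-effects) meta-analysis sketch with brms.
# dat holds study-level log(HR) estimates (yi), their standard errors (sei),
# and a study label; the values and priors here are illustrative only.
library(brms)

dat <- data.frame(
  study = c("A", "B", "C"),
  yi    = c(0.9, 1.6, 0.4),
  sei   = c(0.15, 0.20, 0.10)
)

fit <- brm(
  yi | se(sei) ~ 1 + (1 | study),   # known sampling SEs, study-level random effects
  data   = dat,
  prior  = c(prior(normal(0, 1), class = Intercept),
             prior(normal(0, 0.5), class = sd)),  # weakly informative; adjust as needed
  chains = 4, iter = 4000, seed = 1
)

summary(fit)  # posterior for the pooled log(HR) and the between-study SD (tau)
```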


The difference arises because error estimation is faulty with random-effects models: the model-based variance underestimates the true variance. The Hartung-Knapp method is a crude fix for this problem, but the real fix is to avoid the RE model in meta-analysis and to use fixed-effect models that also allow for heterogeneity of effects, such as this model.
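
For concreteness, here is a sketch of the IVhet-style calculation as I read Doi and colleagues' description (so treat the details as my assumption, not a definitive implementation): the pooled estimate uses ordinary inverse-variance weights, while its variance inflates each study's sampling variance by the DL estimate of tau-squared:

```r
# IVhet-style pooled estimate and variance (sketch based on Doi et al.'s description).
# yi are study log(HR)s and vi their sampling variances (illustrative values).
yi <- c(0.9, 1.6, 0.4)
vi <- c(0.15, 0.20, 0.10)^2

wi <- 1 / vi                        # fixed-effect (inverse-variance) weights
mu <- sum(wi * yi) / sum(wi)        # pooled estimate (same as the fixed-effect estimate)

# DerSimonian-Laird estimate of tau^2
Q    <- sum(wi * (yi - mu)^2)
tau2 <- max(0, (Q - (length(yi) - 1)) / (sum(wi) - sum(wi^2) / sum(wi)))

# IVhet variance: FE weights, but each study's variance inflated by tau^2
var_mu <- sum(wi^2 * (vi + tau2)) / sum(wi)^2
c(estimate = mu, se = sqrt(var_mu),
  ci_lower = mu - 1.96 * sqrt(var_mu), ci_upper = mu + 1.96 * sqrt(var_mu))
```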

That said, I think a minimum of five studies is needed to ensure nominal coverage even with optimal model selection.

That’s enough of a reason to just jump to Bayesian hierarchical models IMHO, and be done with it.


Thank you. I have now used MetaXL to perform an IVhet analysis, which is great.

nice!

If of interest, the metafor package also implements this in R.

It seems that what Wolfgang Viechtbauer has described as IVhet in metafor is a random-effects model with different weights than the default ones, so it does not appear to be a correct implementation of the IVhet model.
On the other hand, Stata has an excellent implementation of the IVhet model (metan and admetan), but we always have MetaXL to fall back on.
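
For readers following along, I assume the metafor call being referred to is something like the following: a random-effects fit with user-supplied inverse-variance weights rather than the default weights (illustrative data; whether this reproduces IVhet exactly is the point under discussion above):

```r
# Presumably the kind of metafor call being discussed (an assumption on my part):
# a random-effects model with user-supplied inverse-variance weights
# instead of the default 1 / (vi + tau^2) weights.
library(metafor)

yi <- c(0.9, 1.6, 0.4)          # illustrative log(HR)s
vi <- c(0.15, 0.20, 0.10)^2     # illustrative sampling variances

res <- rma(yi = yi, vi = vi, weights = 1 / vi, method = "DL")
summary(res)
```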


Oh, interesting, thank you.