# Meta-analysis: Ideal method to calculate the absolute risk reduction?

I want to calculate the absolute risk reduction (aka, risk difference) from a meta-analysis of randomized trials (the outcome is mortality). I have found mixed references on what would be the ideal method to estimate the risk difference (see below).

I would like to hear your thoughts on this topic. Thank you in advance.

### Doi et al. 2020

This paper by @s_doi (already thoroughly discussed in another thread) advocates pooling odds ratios and then deriving the pooled risk difference from the pooled OR:

> The risk difference … should be derived from the OR.

### Cochrane

The Cochrane Handbook, on the other hand, does not clearly favor the OR over the risk ratio:

> Empirical evidence suggests that relative effect measures are, on average, more consistent than absolute measures (Engels et al 2000, Deeks 2002, Rücker et al 2009). For this reason, it is wise to avoid performing meta-analyses of risk differences, unless there is a clear reason to suspect that risk differences will be consistent in a particular clinical situation. On average there is little difference between the odds ratio and risk ratio in terms of consistency (Deeks 2002).

> It is generally recommended that meta-analyses are undertaken using risk ratios (taking care to make a sensible choice over which category of outcome is classified as the event) or odds ratios. This is because it seems important to avoid using summary statistics for which there is empirical evidence that they are unlikely to give consistent estimates of intervention effects (the risk difference).

> It may be wise to plan to undertake a sensitivity analysis to investigate whether choice of summary statistic (and selection of the event category) is critical to the conclusions of the meta-analysis.

> It is often sensible to use one statistic for meta-analysis and to re-express the results using a second, more easily interpretable statistic. For example, often meta-analysis may be best performed using relative effect measures (risk ratios or odds ratios) and the results re-expressed using absolute effect measures (risk differences).

and, in another chapter

> Because risk ratios are easier to interpret than odds ratios, but odds ratios have favourable mathematical properties, a review author may decide to undertake a meta-analysis based on odds ratios, but to express the result as a summary risk ratio.
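That OR-to-RR re-expression has a standard closed form given an assumed comparator risk (ACR): RR = OR / (1 − ACR × (1 − OR)). A minimal Python sketch (the OR and ACR values below are made up purely for illustration):

```python
def or_to_rr(odds_ratio: float, acr: float) -> float:
    """Re-express a pooled odds ratio as a risk ratio at an
    assumed comparator (control) risk ACR, using the standard
    conversion RR = OR / (1 - ACR * (1 - OR))."""
    return odds_ratio / (1 - acr * (1 - odds_ratio))

# Example: a pooled OR of 0.5 at a comparator risk of 0.3
print(round(or_to_rr(0.5, 0.3), 3))  # → 0.588
```

Note that the result depends on the ACR, which is exactly why the Handbook treats the OR as the pooled statistic and the RR as a derived, context-specific re-expression.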

Lastly, the GRADE group also does not favor one effect size over the other in this article.


I commented on similar questions for a colleague a few weeks ago, and came up with a near identical set of resources (including DataMethods link, of course!)

I’ll just add that Chapter 15 of the Cochrane Handbook has a few more details on deriving risk differences from a summary RR derived from a meta-analysis.

See particularly 15.4.3 (the issue at hand: choosing effect measures) and 15.4.4.2 for a very simple formula for obtaining an absolute risk reduction from an RR with an assumed comparator risk (ACR). In the latter case, RDs can be computed for several different ACRs, which I think I have seen recommended by others on the long DataMethods thread you linked.
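The formula in question is RD = ACR × (1 − RR), and evaluating it across several ACRs is a one-liner. A short Python sketch (the RR of 0.8 and the range of ACRs are assumed values for illustration):

```python
def rd_from_rr(acr: float, rr: float) -> float:
    """Absolute risk reduction from a summary risk ratio at an
    assumed comparator risk (ACR): RD = ACR * (1 - RR)."""
    return acr * (1 - rr)

# RDs across a range of plausible comparator risks, assuming RR = 0.8
for acr in (0.05, 0.10, 0.20, 0.40):
    print(f"ACR={acr:.2f}  RD={rd_from_rr(acr, 0.8):.3f}")
```

Presenting the RD at several comparator risks, as recommended, makes clear how strongly the absolute benefit scales with baseline risk even when the RR is held fixed.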

(I won’t re-hash arguments for/against particular metrics here, and will leave others to comment on the suitability of specific implementations.)

I will still recommend the same, especially if we believe that the RR is not portable across baseline risks and that it may be better to combine both RRs into an OR for use as an effect measure.

For use with logistic regression, we created the Stata module logittorisk, which generates the risk difference and NNT for any baseline risk after a logistic regression has been run.
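The arithmetic behind that kind of conversion is straightforward. This is not the logittorisk module itself, just a hedged Python sketch of the same steps: apply the OR to the baseline odds, convert back to a risk, and report the RD and NNT (the OR and baseline risk below are made-up inputs):

```python
import math

def rd_nnt_from_or(odds_ratio: float, baseline_risk: float):
    """Convert an odds ratio (e.g. from logistic regression) into a
    risk difference and NNT at a chosen baseline risk:
    apply the OR to the baseline odds, then map odds back to a risk."""
    odds0 = baseline_risk / (1 - baseline_risk)
    odds1 = odds0 * odds_ratio
    risk1 = odds1 / (1 + odds1)
    rd = baseline_risk - risk1
    nnt = math.inf if rd == 0 else 1 / rd
    return rd, nnt

rd, nnt = rd_nnt_from_or(odds_ratio=0.5, baseline_risk=0.2)
print(f"RD={rd:.4f}  NNT={nnt:.1f}")
```

Because the OR is applied on the odds scale, the implied RD varies with the baseline risk, which is the whole point of reporting it per baseline risk rather than as a single pooled number.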

Cochrane suggests that RD = ACR×(1-RR)

We have suggested that

RD = r_0 - [(r_0/(1-r_0) × RR(+ve)/RR(-ve)) / (1 + r_0/(1-r_0) × RR(+ve)/RR(-ve))]

Both give very different results

(note ACR = r_0, and RRc is the risk ratio of the complementary, non-event outcome, so RR/RRc is the odds ratio)

RD = ACR - [(ACR/(1-ACR) × RR/RRc) ÷ (1 + ACR/(1-ACR) × RR/RRc)]   [method 1]

compared with

RD = ACR×(1-RR)   [method 2]

Since there are no comments, let's take an example:
intervention risk = 4/42
control risk = 34/41
RR = 0.115 and RRc = 5.299
If ACR is 0.25:
method 1: RD = 0.243
method 2: RD = 0.221
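For what it's worth, the two methods can be checked numerically. A small Python sketch using the trial figures quoted above (4/42 vs 34/41, ACR = 0.25):

```python
# Trial data from the example above
r_int, r_ctl = 4/42, 34/41          # intervention and control risks
rr = r_int / r_ctl                   # risk ratio for the event
rrc = (1 - r_int) / (1 - r_ctl)      # risk ratio for the non-event
odds_ratio = rr / rrc                # OR = RR(+ve) / RR(-ve)
acr = 0.25                           # assumed comparator risk (r_0)

# Method 1: apply the OR to the baseline odds, convert back to a risk
odds0 = acr / (1 - acr)
rd1 = acr - (odds0 * odds_ratio) / (1 + odds0 * odds_ratio)

# Method 2: Cochrane's RD = ACR * (1 - RR)
rd2 = acr * (1 - rr)

print(f"RR={rr:.3f}  RRc={rrc:.3f}  RD1={rd1:.3f}  RD2={rd2:.3f}")
# → RR=0.115  RRc=5.299  RD1=0.243  RD2=0.221
```

The gap between the two answers is entirely down to which effect measure (OR vs RR) is assumed to transport from the trial to the target baseline risk.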

Which is correct?

> Which is correct?

You are not going to find the answer to that in mathematics, which alone will never be sufficient to guarantee that an effect measure is stable across groups.

In most cases, no stable effect measure exists, meaning that no method is "correct". In some special cases, a plausible biological mechanism may be sufficient to guarantee approximate stability of a specific effect measure. If that is the case, I suggest you use that one. In other cases, I guess it is also possible to look for empirical evidence for stability, but to make any use of your empirical findings, you are going to have to deal with thorny philosophical issues related to the extrapolator's circle.

You are also not going to find a method that is “always” correct. Nature just doesn’t work that way. You are going to have to deal with each exposure-outcome relationship separately, and evaluate whether your beliefs about that specific exposure-outcome relationship are consistent with a theoretical rationale for stability of any effect measure.

I also want to add that the focus on calculating the absolute risk reduction is a red herring. If you are able to compute the absolute risk reduction from r_0 and r_1, you have no need for the absolute risk reduction: any decision maker would prefer to be told r_0 and r_1, which give strictly more information. If they are using a heuristic that relies on the risk difference, they can easily calculate it themselves, but knowing r_0 and r_1 gives many other options.

The focus on calculating the risk difference (and the oft-repeated but only half-true claim that it plays a central role in decision making) just leads to confusion about what effect measures are for. We discuss this in more detail in the preprint discussed in the other thread.
