Hazard ratios vs odds ratios

I was reading this interesting paper: Ten simple rules for conducting umbrella reviews

and this phrase caught my attention:

An exact conversion of an effect size into an equivalent OR may not always be possible, because the measures of effect size may be inherently different and the calculations may need data that may be unavailable. […] Fortunately, approximate conversions are relatively straightforward. On the one hand, the analysts may assume that HRs, IRRs, RRs and ORs are approximately equivalent as far as the incidence is not too large

Does anyone have a reference for this statement? I couldn't find a paper discussing what magnitude of incidence counts as "not too large". I had also always thought that the HR and the RR (or OR) are different estimands, so that conversion between them is not possible, but maybe I'm missing something. I would like to hear your thoughts about it.

Thank you!


There is a brief and hopefully clear explanation of the math behind the quoted passage on pages 60-61 of the book Modern Epidemiology, 3rd ed. (Rothman, Greenland, Lash, 2008), also on p. 50 of the 2nd ed. (1998). For positive associations, let O1 be the incidence odds of the outcome in the treated or exposed group. Then we expect OR > HR > RR > 1, with the OR roughly O1 × 100% higher than the RR (the ordering reverses for negative associations). For example, if the higher of the two incidence odds is 1:9 (a risk of 10%), the OR should be within about 10% of the RR, and under a time-constant baseline hazard the HR will typically fall about halfway between the OR and the RR.
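To make the ordering concrete, here is a minimal numeric sketch. It assumes two groups followed for the same time, with a constant hazard in each group (the same simplifying assumption as above), and computes the three measures directly from the incidence proportions; the function name and the example risks (10% vs. 5%) are my own choices for illustration.

```python
import math

def rr_or_hr(p1, p0):
    """Risk ratio, odds ratio, and hazard ratio for incidence
    proportions p1 (exposed) and p0 (unexposed), assuming a
    time-constant hazard in each group over equal follow-up."""
    rr = p1 / p0
    odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))
    # Under a constant hazard, risk = 1 - exp(-lambda * t), so
    # lambda is proportional to -log(1 - risk) and the follow-up
    # time t cancels out of the hazard ratio.
    hr = math.log(1 - p1) / math.log(1 - p0)
    return rr, odds_ratio, hr

# 10% vs. 5% risk: the exposed odds are 1:9, so OR and RR
# should differ by less than ~10%, with HR roughly midway.
rr, odds_ratio, hr = rr_or_hr(0.10, 0.05)
print(f"RR = {rr:.3f}, HR = {hr:.3f}, OR = {odds_ratio:.3f}")

# Rare outcome (1% vs. 0.5%): all three nearly coincide,
# matching the "incidence not too large" condition in the quote.
rr2, or2, hr2 = rr_or_hr(0.01, 0.005)
print(f"RR = {rr2:.3f}, HR = {hr2:.3f}, OR = {or2:.3f}")
```

Running this shows OR > HR > RR > 1 in both cases, with the gap between the three measures shrinking as the incidence gets smaller, which is exactly the sense in which they become "approximately equivalent" for rare outcomes.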


I am not certain, but Anne Whitehead's book on meta-analysis of clinical trials may be helpful, because somewhere in there she likely explains how to convert different measures to a common effect size.