Just a simple Q: Why do the distances from the mean to the upper & lower limits differ?

Dear all,
I have a simple question.
I have reviewed some meta-analysis papers and noticed that the distances from the point estimate to the upper and lower confidence limits differ; for example, OR 0.85 (0.74–0.98), where the lower limit is 0.11 below the estimate but the upper limit is 0.13 above it.
I am wondering why these distances are different.
I would be grateful if you could add any reference on this issue.


Martin Bland’s paper on the odds ratio (link) explains this. In short, the confidence limits are calculated on the log-odds scale, where they are symmetric around the point estimate. They are then back-transformed (exponentiated) to the odds-ratio scale, where they become asymmetric. That’s also why graphs of odds ratios (and risk ratios etc.) should use logarithmic scales instead of linear ones (see this paper).
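A small numeric sketch may make this concrete. The standard error below is an assumed value, chosen so the resulting 95% CI roughly matches the OR 0.85 (0.74–0.98) from the question; it is illustrative, not taken from any real study:

```python
import math

# Hypothetical example: point estimate OR = 0.85, with an assumed
# standard error of log(OR) picked to reproduce a CI of about (0.74, 0.98).
log_or = math.log(0.85)
se = 0.072   # assumed SE on the log-odds scale
z = 1.96     # critical value for a 95% confidence interval

lo_log = log_or - z * se
hi_log = log_or + z * se

# On the log scale the limits are exactly symmetric around log_or:
print(log_or - lo_log, hi_log - log_or)   # equal distances

# Back-transformed to the odds-ratio scale they become asymmetric:
lo, hi = math.exp(lo_log), math.exp(hi_log)
print(round(lo, 2), round(hi, 2))          # roughly 0.74 and 0.98
print(round(0.85 - lo, 3), round(hi - 0.85, 3))  # unequal distances
```

Exponentiation stretches the upper half of the interval more than the lower half, which is exactly the asymmetry you observed.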