Should one derive the risk difference from the odds ratio?

Doi: Why do you keep repeating that the RR is not portable? We agree on that, so it was never an issue; why bring it up? We also agree that, unlike the RR and RD, the OR is not mathematically constrained by baseline risks, a trivial fact known for many decades, so why bring that up? We also agree that, because of that fact, when you looked over a vast range of studies crudely, with no attempt to stratify them by topic, you saw little association of the OR with baseline risks, especially compared to the RR or RD. That's no surprise either, because such an analysis isn't much different from looking at measures computed from randomly generated tables.
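
To make that constraint concrete, here is a minimal illustrative sketch (purely numerical, not tied to any dataset of Doi's or ours): with baseline risk p0 and treated risk p1, the RR cannot exceed 1/p0 and the RD cannot exceed 1 - p0, whereas the OR can take any positive value.

```python
# Illustrative only: the algebraic bounds that tie RR and RD to the baseline
# risk p0, and the absence of any such bound for the OR.

def measure_bounds(p0):
    """Largest attainable RR and RD at baseline risk p0 (letting p1 -> 1)."""
    max_rr = 1.0 / p0        # RR = p1 / p0 <= 1 / p0
    max_rd = 1.0 - p0        # RD = p1 - p0 <= 1 - p0
    return max_rr, max_rd    # OR = [p1/(1-p1)] / [p0/(1-p0)] -> infinity as p1 -> 1

for p0 in (0.01, 0.20, 0.80):
    max_rr, max_rd = measure_bounds(p0)
    print(f"p0 = {p0:.2f}: RR <= {max_rr:.0f}, RD <= {max_rd:.2f}, OR unbounded")
```

That is all the mathematical independence of the OR from baseline risk amounts to; it says nothing about empirical constancy across real studies.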

As for portability vs. constancy: you are being circular when you state "our commentaries focus on non-portability i.e. even when the true effect is the same…" What is "the true effect"? There are true effects within studies, but no singular true effect unless you assume that something you have labeled "the effect" is constant. Yet you never give any scientific reason for expecting anything to be constant. Instead, you only observe that the OR appears unassociated with baseline risks when you fail to stratify on topic. That does not show the OR is portable, let alone constant, unless you define "portability" as mathematical independence from baseline risk, which again is just being circular.

The portability we are concerned with is portability in the real world, and that comes down to constancy across studies, which is an empirical question, not a mathematical one. So, as I see it, the main point of contention is this: the contextual absurdity of failing to account for the fact that portability only matters within a topic. Without some extraordinary considerations, we do not transport estimates of the effect of antihistamines on hives to project effects of chemotherapies on cancers, nor do we combine such disparate effects in meta-analyses.

Your collider argument is irrelevant because what matters is what we are after scientifically: projection of an observed effect or association, for one specific treatment and one specific outcome, from one setting to another. When we restrict appropriately to portability within meta-analyses, we see that the OR is itself not terribly portable. And that is no surprise either, since in reality effect measures have no reason to be constant, no matter what their scale (apart from very special instances of mechanical models, e.g., ones in which the RD or survival ratios become constant, which seem to apply in few health and medical settings).
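
One arithmetic point worth spelling out: constancy on one scale generally rules out constancy on the others, so at most one of these measures can be constant across settings with different baseline risks. A toy sketch (hypothetical numbers, a fixed OR of 2 chosen purely for illustration):

```python
# Toy illustration: holding the OR fixed at 2 across settings with different
# baseline risks p0 forces the implied RR and RD to vary.

def treated_risk_from_or(p0, odds_ratio):
    """Treated risk p1 implied by a fixed odds ratio at baseline risk p0."""
    odds1 = odds_ratio * p0 / (1 - p0)
    return odds1 / (1 + odds1)

OR = 2.0
for p0 in (0.05, 0.20, 0.50):
    p1 = treated_risk_from_or(p0, OR)
    print(f"p0 = {p0:.2f}: p1 = {p1:.3f}, RR = {p1/p0:.2f}, RD = {p1-p0:.3f}")
# With OR = 2 throughout, the RR falls (1.90 -> 1.67 -> 1.33) while the RD
# changes (0.045 -> 0.133 -> 0.167): the measures cannot all be constant at once.
```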

As we warn, any assumption of constancy or portability needs far more justification than the crude analysis you present (crude in the sense of failing to stratify on meta-analysis or topic). And it needs far more justification than just getting p > 0.05 from a test of “interaction”, especially since we know such tests have extremely low power for OR heterogeneity given the sizes of typical trials and effects.
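
To give a sense of how low that power can be, here is a rough simulation sketch with hypothetical inputs (two strata of 150 per arm, 10% baseline risk, true ORs of 1.5 and 3.0, i.e. a two-fold ratio of odds ratios, tested with a Wald test on the difference in log-ORs; none of these numbers come from any real trial):

```python
# Rough simulation sketch (illustrative inputs only): power of a Wald test
# for OR heterogeneity across two strata, each with n per arm, baseline
# risk 10%, and true ORs of 1.5 vs 3.0, at two-sided alpha = 0.05.
import numpy as np

rng = np.random.default_rng(0)

def sim_power(n_per_arm=150, p0=0.10, true_ors=(1.5, 3.0),
              n_sims=5000, alpha_z=1.96):
    rejections = 0
    for _ in range(n_sims):
        log_ors, variances = [], []
        for odds_ratio in true_ors:
            # treated risk implied by the stratum's true OR
            odds1 = odds_ratio * p0 / (1 - p0)
            p1 = odds1 / (1 + odds1)
            e1 = rng.binomial(n_per_arm, p1)   # events, treated arm
            e0 = rng.binomial(n_per_arm, p0)   # events, control arm
            # 2x2 cells with a 0.5 continuity correction to avoid zeros
            a, b = e1 + 0.5, n_per_arm - e1 + 0.5
            c, d = e0 + 0.5, n_per_arm - e0 + 0.5
            log_ors.append(np.log((a / b) / (c / d)))
            variances.append(1 / a + 1 / b + 1 / c + 1 / d)
        z = (log_ors[0] - log_ors[1]) / np.sqrt(sum(variances))
        rejections += abs(z) > alpha_z
    return rejections / n_sims

print(f"Estimated power to detect OR heterogeneity: {sim_power():.2f}")
# In runs like this the estimate comes out well below 0.5.
```

Even with a two-fold ratio of odds ratios between strata, the rejection rate in such a setup falls well short of conventional power targets, which is why p > 0.05 from an interaction test says little about constancy.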

For more considerations along these lines, see Poole C, Shrier I, VanderWeele TJ. "Is the risk difference really a more heterogeneous measure?" Epidemiology 26, 714–718 (2015). There are, to be sure, other points of contention, but as sketched here and explained in detail in our paper, your portability claims for the OR are, again in our view, simply the consequence of a series of mistakes: using definitions and examples that artificially make the OR look "portable", confusing mathematical with empirical properties, and failing to understand which properties are scientifically relevant and which are not.
