After watching BBR Session 8, I had a general question about effect measures in biomedical research. I understand the mathematical difference between the two types of measures, but I’m having trouble understanding when to use an absolute vs relative measure in study/experimental design.
Are there any general guidelines about choosing between an absolute vs relative effect measure? What are good examples of scenarios (pre-clinical or clinical) where an absolute measure would be more appropriate? What about scenarios where a relative measure is preferred?
Relative measures are most relevant when you want to transfer your finding to other populations. Absolute measures are very dependent on the baseline incidence in the population under study, and will always be low if you are studying a healthy population or a low-risk group. That risk estimate would not transfer to a high-risk population. Absolute risk estimates are most relevant when communicating the results to a specific population.
One often sees the accusation that authors report relative risk estimates because they seem more dramatic: a 50% increased risk sounds scarier than an absolute risk increase of 0.5 percentage points from a baseline risk of 1%. However, I think the same argument can be turned around, and those reporting absolute risks may be accused of doing so because they want to downplay, say, a 50% increased risk by pointing out that it corresponds to only a 0.5 percentage-point increase. I think both are very bad arguments.
My opinion is that when reporting results from one population that are meant to be generalized, relative measures make the most sense. These can always be translated into absolute risks customized to the population of interest when the baseline risk of that population is known. So I guess always report relative measures, and where relevant also report what this corresponds to in absolute terms. At least when talking about risk, I can’t see a situation where the absolute risk estimate alone is a good solution. But this is my personal opinion only, and I do not have any specific sources in mind. I will follow this thread with great interest!
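The translation described above is simple arithmetic. As a toy sketch (in Python, and assuming for illustration that the relative effect is a risk ratio that holds across populations):

```python
def absolute_risk_reduction(baseline_risk, risk_ratio):
    """Translate a relative effect (risk ratio) into an absolute
    risk difference for a population with a given baseline risk."""
    treated_risk = baseline_risk * risk_ratio
    return baseline_risk - treated_risk

# The same 50% relative reduction (RR = 0.5) implies very different
# absolute benefits depending on the population's baseline risk:
for p0 in (0.01, 0.10, 0.30):
    print(p0, absolute_risk_reduction(p0, 0.5))
```

This is also why an absolute effect estimated in a low-risk study population would understate the benefit in a high-risk population.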
Thanks for the reply! The discussion of transferring findings to other populations was especially helpful. Would you say this same reasoning applies to continuous endpoints? Absolute vs relative risk for binary endpoints now makes sense, but I’m having trouble transferring your explanation to a continuous endpoint like blood pressure. Although a relative measure seems preferable because of its transferability, because blood pressure has a physical meaning and a limited range of values that can feasibly exist, my instinct tells me an absolute measure could also be used. How would you interpret the differences between absolute vs relative effects for a continuous endpoint?
I don’t subscribe to the statement that relative effect sizes are helpful to transfer findings to other populations. I would be thankful for a concrete example where this really makes any sense in practice.
Relative effect sizes are relevant when it comes to study design (power is related to relative effect sizes), and also to comparing different methods that measure different surrogates for the same underlying variable (e.g. measuring an RNA concentration via band densities in a Northern blot, via fluorescence intensities of spots on a microarray, or via Ct values in real-time PCR: these methods measure on different scales with different technical variances, so comparing relative effects may make some sense here to compare the performance of the techniques).
Every health technology assessment (e.g. NICE in the UK, CADTH in Canada) is partially informed by a health economics model that needs absolute effects in the target population. Typically this is done by combining a country-specific baseline risk with relative effect measures.
Are you just saying that you don’t think the results of e.g. an RCT generalize to other populations?
The argument for generalization can only go via an assessment of the comparability of the populations, which is beyond the data and the statistics. Statistically, an RCT generalizes from the sample to the population it was sampled from (not from one population to another). That population consists of all subjects that had the same probability of being sampled.
Apart from the fact that there can always be (unknown, unappreciated) factors that invalidate a risk score obtained for one population when applied to another, I wonder whether a risk assessment is even comparable when the baseline risks are already considerably different. If it is not, then I don’t see why one should not give the absolute risks directly (hoping for the best that there won’t be other, less obvious factors invalidating the estimates).
I think both relative and absolute effect measures have their place, and should probably both be reported. In the case of blood pressure, an absolute change of 5 mmHg would have different meanings to a person with a SBP of 130 mmHg compared to a person with a SBP of 200, so I still think reporting absolute measures alone is the worst option. It is also relevant how a clinically relevant change is defined. For bodyweight, a clinically relevant weight loss is defined as at least 5%. And a weight loss of X kg would have different implications depending on the starting weight. So in that case, I would probably opt for relative weight loss as the main effect measure. For continuous outcomes, in general, those starting further from the normal range have a larger potential for change, so comparing directly is always difficult regardless of how the effect is expressed.
Well, decision making usually just needs this: what is the absolute benefit? This depends on the absolute risk without treatment and the effect of treatment. See e.g. https://www.ncbi.nlm.nih.gov/pubmed/30789432
Also cost-effectiveness considers absolute gains in e.g. quality-adjusted life-years; not relative
There may be exceptions, like relative weight loss, which makes sense to me
But the absolute effect of treatment depends on the baseline risk, and you won’t know the absolute benefit in a population unless you study that particular population specifically.
Also, on the population level, and for health policy, I would argue that relative effects are relevant as well. Relative effects translate directly to things such as health care costs. A reduction from 1% to 0.5% risk of a disease would cut disease occurrence in a population, as well as the associated costs, in half, not by 0.5%.
For a continuous response variable such as blood pressure this is largely a non-issue, as there are only a few choices and these all use absolute blood pressure:
difference in means
difference in medians
entire posterior distribution of difference in means
concordance probability (c-index / Mann-Whitney U-statistic): P(randomly chosen patient on tx B has a lower BP than a randomly chosen patient on tx A); for a normal distribution this is a simple function of the difference in means and the SD
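The last item can be made concrete. A minimal sketch, assuming BP is normally distributed with a common SD in both arms (so that the difference A − B is normal with variance 2·SD²):

```python
from math import erf, sqrt

def normal_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def concordance_probability(mean_a, mean_b, sd):
    """P(a randomly chosen patient on tx B has lower BP than a
    randomly chosen patient on tx A), assuming normality and a
    common SD: B < A  <=>  A - B > 0, and
    A - B ~ Normal(mean_a - mean_b, 2 * sd**2)."""
    return normal_cdf((mean_a - mean_b) / (sd * sqrt(2.0)))
```

For example, a 5 mmHg difference in means with SD 10 gives a concordance probability of about 0.64, and no difference in means gives exactly 0.5.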
The fact that relative measures, when suitably chosen (i.e. hazard ratio or odds ratio, not risk ratio), transport better to other populations has been extensively shown. I’ve touched on this in blog articles and course notes. But take a look at the myriad of forest plots shown in clinical trial reports, where you see amazing constancy of odds and hazard ratios, while absolute risk differences and differences in life expectancy vary wildly with baseline risk.
Most of the time there are no interactions between treatment and baseline covariates. When that is true, the relationship between baseline risk and absolute risk reduction due to treatment is simple math and does not require data. See for example this and this.
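As a sketch of that simple math (assuming, for illustration, a constant odds ratio across baseline risks):

```python
def risk_under_treatment(baseline_risk, odds_ratio):
    """Apply a constant odds ratio to a baseline risk:
    convert risk to odds, multiply by the OR, convert back."""
    odds = baseline_risk / (1.0 - baseline_risk) * odds_ratio
    return odds / (1.0 + odds)

def absolute_risk_reduction(baseline_risk, odds_ratio):
    return baseline_risk - risk_under_treatment(baseline_risk, odds_ratio)

# One constant OR = 0.6 implies very different absolute benefits
# at different baseline risks (no data needed, just arithmetic):
for p0 in (0.02, 0.10, 0.30):
    print(p0, round(absolute_risk_reduction(p0, 0.6), 4))
```

This is the no-interaction case: the relative effect is held fixed, and the absolute risk reduction follows mechanically from the baseline risk.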