Many studies attempt to measure the years of life lost (YLL) due to COVID-19 (see e.g. Arolas et al. or Quast et al.) using life tables: the number of years lost for each death is taken as the average number of years of life remaining, according to the life table, at the age and for the sex of the deceased.
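As a concrete illustration of this naive life-table approach, here is a minimal sketch. Both the remaining-life-expectancy values and the recorded deaths are made-up numbers for demonstration, not taken from any real life table or dataset:

```python
# Naive life-table YLL: each death contributes the life-table remaining
# life expectancy at its (sex, age). All numbers below are illustrative.

# remaining life expectancy e(x) by (sex, age at death) -- hypothetical values
life_expectancy = {
    ("male", 70): 13.1,
    ("male", 80): 7.6,
    ("female", 70): 15.4,
    ("female", 80): 9.0,
}

# one (sex, age at death) pair per recorded death -- illustrative
deaths = [("male", 70), ("female", 80), ("male", 80), ("female", 70)]

# total YLL is simply the sum of remaining life expectancies over all deaths
total_yll = sum(life_expectancy[(sex, age)] for sex, age in deaths)
mean_yll = total_yll / len(deaths)

print(f"total YLL: {total_yll:.1f} years, mean per death: {mean_yll:.3f}")
```

Note that this adjusts for nothing beyond age and sex, which is exactly the limitation discussed next.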
One of the many problems with such approaches is that life tables only adjust for age and sex, whereas people dying from COVID-19 are typically also multimorbid. They are far from a random sample of the population, even for a given age and sex, so such naive calculations overestimate YLL.
Thus, Hanlon et al. attempted to adjust for this bias by explicitly incorporating the presence of 11 comorbidities into the YLL calculation. They were, however, limited by the fact that they had no individual-level comorbidity data, so they had to rely on cumbersome approximations.
Another problem is that - contrary to what many would think based on the fact that "losing a life year" sounds unequivocally negative - the ideal value of YLL is not zero. It is an expected remaining time, which can never be zero (even for a death at 100 years of age it is about 2 years). So how should we interpret YLL? To which number should it be compared?
To that end, Marshall introduced the concept of the "YLL norm", which is simply the number of years lost by someone who undergoes exactly the mortality specified by the life table. In some sense this can be considered an "expected" YLL, without any special, mortality-modifying circumstance, to which actual YLLs can be compared.
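The norm can be sketched as follows: if someone alive at a given age dies according to the life table itself, their expected YLL is the average of the remaining life expectancy at the age of death, weighted by the probability of dying at each age. The toy hazards below are invented for illustration (everyone in this toy table dies by age 91), not a real life table:

```python
# Sketch of Marshall's "YLL norm" on a toy life table.
# q[x] is the (hypothetical) probability of dying between ages x and x+1.
q = [0.01] * 60 + [0.05] * 30 + [1.0]  # toy hazards; certain death at 90

def remaining_life_expectancy(age):
    """e(x): expected further years lived, given alive at `age`.
    Uses the usual half-year credit for the year of death."""
    e, surv = 0.0, 1.0
    for x in range(age, len(q)):
        e += surv * ((1 - q[x]) + 0.5 * q[x])
        surv *= 1 - q[x]
    return e

def yll_norm(age):
    """Expected YLL for someone alive at `age` who follows the life table:
    sum over death ages x of P(dies at x | alive at age) * e(x)."""
    norm, surv = 0.0, 1.0
    for x in range(age, len(q)):
        norm += surv * q[x] * remaining_life_expectancy(x)
        surv *= 1 - q[x]
    return norm

print(yll_norm(0), yll_norm(80), yll_norm(90))
```

Even at age 90, where death is certain in this toy table, the norm is positive (half a year here), mirroring the point above that YLL can never reach zero.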
But this is also not that simple. Should we simply subtract the norm from the measured YLL to calculate an "excess" due to the investigated factor, such as the epidemic? Rubo et al. argue in a very interesting paper that this is not necessarily the case and, more importantly, that there is no general rule. For instance, if we want to measure the burden of a serial killer who specifically targets people above 80 years of age, we could easily end up concluding that the serial killer is beneficial to health (as deaths above 80 years are likely associated with a remaining life expectancy below the norm). On the other hand, if we investigate a totally hypothetical risk factor, where we have absolutely no causal idea of whether it indeed contributes to deaths, we should subtract the entirety of the norm. The bottom line, they argue, is that the fraction of the norm deducted should depend on how mono-causal the investigated factor's association with death is.
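The arithmetic behind this argument can be sketched in a few lines. The fraction `f` of the norm that is deducted, and all the numbers, are purely illustrative of the two endpoints described above (subtracting nothing vs. subtracting the full norm), not values proposed by Rubo et al.:

```python
# Excess YLL when a fraction f of the norm is deducted.
# f and the YLL/norm values below are illustrative, not from any study.

def excess_yll(measured_yll, norm, f):
    """f in [0, 1]: fraction of the YLL norm deducted from the measured YLL."""
    return measured_yll - f * norm

# hypothetical deaths above age 80: measured YLL below the norm
yll, norm = 6.0, 8.0

print(excess_yll(yll, norm, 0.0))  # nothing deducted: 6.0
print(excess_yll(yll, norm, 1.0))  # full norm deducted: -2.0
```

With the full norm deducted, the excess goes negative, which is precisely the "serial killer appears beneficial" paradox; the difficulty is that no objective rule tells us which `f` to use in between.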
Overall, they argue against the application of YLL altogether in this particular instance, as they consider it to be almost impossible to objectively determine what fraction of the norm should be subtracted.
My - small - addition to this debate is that I had access to individual-level comorbidity data for all Hungarian COVID-19 deaths (>27,000), so I could explicitly calculate the above values: Different approaches to quantify years of life lost from COVID-19 | medRxiv. (This paper also provides a more detailed summary of the debate, and a literature overview of the published YLL estimates for COVID-19.)
I'd appreciate it if you'd share your thoughts on this debate.