Should one derive risk difference from the odds ratio?

The perspective that @Sander has attempted to explain in a number of posts and papers might be more easily grasped by substituting “weighted average” (or expectation) for “collapsible effect measure.”

In actuarial science, the question of how to derive a fair price for an individual who needs to insure a risk is fundamental, and the tools for analyzing the situation at both the group and individual levels are known in insurance circles as credibility theory. It originated in property/casualty insurance, but there are adaptations to health. You might be more familiar with the term “partial pooling” of estimates, if I remember some of your comments on meta-analysis correctly.

Credibility methods have very close relationships to Bayes estimators.

In credibility theory, the analyst attempts to compute a price that blends the unique aspects of an individual risk with the historical, aggregate experience of the risk pool in question.

The best estimator is a weighted average of the prices, where “more credible” information (a lower-variance estimate) gets more weight. By convention, the credibility weight Z is applied to the individual component, and the class component receives the remaining weight 1 − Z. A credibility of 0 indicates the class is homogeneous: the individual component provides no information beyond the class mean. A credibility of 1 indicates the class is heterogeneous: historical class experience is not relevant, and only the individual experience matters.
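To make the weighting concrete, here is a minimal sketch in Python. The convex combination and the Bühlmann-style factor Z = n/(n + k) are standard credibility formulas; the specific numbers (n, k, and the two means) are made up for illustration and come from no cited paper.

```python
def credibility_estimate(individual_mean, class_mean, z):
    """Convex combination: weight Z on the individual experience, 1 - Z on the class."""
    assert 0.0 <= z <= 1.0
    return z * individual_mean + (1.0 - z) * class_mean

def buhlmann_z(n, k):
    """Buhlmann credibility factor Z = n / (n + k).

    n: number of observations for the individual risk;
    k: ratio of within-risk variance to between-risk variance.
    """
    return n / (n + k)

z = buhlmann_z(n=50, k=100)   # Z = 1/3 with these illustrative inputs
price = credibility_estimate(individual_mean=120.0, class_mean=90.0, z=z)
# Z = 0 returns the class mean unchanged; Z = 1 returns the individual mean.
```

As n grows relative to k, Z approaches 1 and the estimate shrinks less toward the class mean, which is exactly the partial-pooling behavior mentioned above.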

The process is remarkably similar to random effects meta-analysis.

The notion of “best effect measure” seems to have an analog in insurance as “best loss reserve estimate.” For these, estimates that can be interpreted as weighted averages (aka “collapsible”) are seen as most relevant. None of that seems to stop actuaries from using models such as logistic regression, and then computing the needed estimate from it, as Sander recommends.
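That workflow, fit a non-collapsible model, then recover a collapsible summary from it, can be sketched with standardization (g-computation). Everything here is hypothetical: the logistic coefficients and covariate values are invented for illustration, and in practice they would come from a fitted model and observed data.

```python
import math

def risk(treated, x, b0=-2.0, b_treat=0.8, b_x=0.5):
    """Predicted risk from an (assumed) logistic model: logit(p) = b0 + b_treat*t + b_x*x."""
    lp = b0 + b_treat * treated + b_x * x
    return 1.0 / (1.0 + math.exp(-lp))

# Stand-in for the observed covariate distribution.
covariates = [0.0, 1.0, 2.0, 3.0]

# Marginal risk difference: average predicted risk under treatment minus
# average predicted risk under control, over the same covariate distribution.
rd = (sum(risk(1, x) for x in covariates) / len(covariates)
      - sum(risk(0, x) for x in covariates) / len(covariates))
```

The model itself uses the odds scale, but the quantity reported (`rd`) is a probability-weighted average of risk differences, i.e., a collapsible estimate computed from the model rather than read off a coefficient.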

A medical analogy might be computing the QALY under treatment adjusted for adverse effects of the treatment, derived from a mechanistic understanding of drug action, for a particular patient.

Related Papers

Venter, G. (2003). Credibility Theory for Dummies. In CAS E-Forum (pp. 621-627).

The following description of “greatest accuracy credibility” is compatible with the example in @AndersHuitfeldt's post describing the use of data from one population to make inferences about another.

The greatest accuracy approach measures relevance as well as stability and looks for the weights that will minimize an error measure. The average of the entire class could be a very stable quantity, but if the members of the class tend to be quite different from each other, it could be of less relevance for any particular class. So the relevance of a wider class average to any member’s mean is inversely related to the variability among the members of the class.

Blum, K. A., & Otto, D. J. (1998). Best estimate loss reserving: an actuarial perspective. In CAS Forum Fall (Vol. 1, No. 55, p. 101). (PDF)

After a long discussion of the properties of a “best estimate”, they conclude for actuarial applications (underscore in original, bold my emphasis):

It is the undiscounted, unmargined, unbiased, best estimate of the probability weighted average of all possible unpaid loss amounts.

There is an extensive discussion of various measures of central tendency, and the mathematical justification for using weighted averages.

Nelder, J. A., & Verrall, R. J. (1997). Credibility theory and generalized linear models. ASTIN Bulletin: The Journal of the IAA, 27(1), 71-82. (PDF)

Norberg, R. (2004). Credibility theory. Encyclopedia of Actuarial Science, 1, 398-406. (PDF)

In actuarial parlance the term credibility was originally attached to experience rating formulas that were convex combinations (weighted averages) of individual and class estimates of the individual risk premium. Credibility theory, thus, was the branch of insurance mathematics that explored model-based principles for construction of such formulas. The development of the theory brought it far beyond the original scope so that in today’s usage credibility covers more broadly linear estimation and prediction in latent variable models.

Here is a case study from a Society of Actuaries research paper.

Zhu, Z., Li, Z., Wylde, D., Failor, M., & Hrischenko, G. (2015). Logistic regression for insured mortality experience studies. North American Actuarial Journal, 19(4), 241-255. (PDF)

In summary, a logistic regression modeling approach allows use of less but more relevant data to address multiple challenges in quantifying insured mortality.
