Should one derive risk difference from the odds ratio?

R-cubed: Clinicians should not be assumed to be in an analyst/modeling role. They usually have no meaningful training in, or understanding of, data analysis or modeling, and no time to acquire either. As for medical topics, they rarely have time to read more than the brief summaries given in editorials in their journals, in medical newsletters such as Medscape and MedPage, and in continuing-education materials (which often read like summaries of summaries). They are thus critically dependent on what appears in those summaries, which is not much. Because of lack of time and the sheer volume of the literature, few can go further, and most of the tiny minority who do (usually medical faculty) still retain only what is highlighted in abstracts.

Yet I find many statisticians write as if typical clinicians were sophisticated enough to appreciate technicalities that few could begin to comprehend. Perhaps this is because those statisticians are insulated in medical schools, dealing with research faculty who are typically far more conversant with statistics than the vast majority of clinicians (although, as many can attest, clinical faculty remain far more vulnerable to the errors of their statistical authorities, as witnessed by the continuing use of correlations and “standardized” coefficients as effect measures in some reports).

Finally, I’ll repeat that the OR controversy has absolutely nothing to do with uncontrolled factors in study design (potential sources of bias or validity problems). It would arise even if every study were a perfect randomized trial on a random sample of the clinical target population (as in my example).
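For readers following the thread title: the arithmetic at issue is that an odds ratio alone does not determine a risk difference; one also needs the baseline risk. A minimal sketch (not from the post above, and the function name is just illustrative) of the standard conversion:

```python
# Sketch: converting an odds ratio (OR) to a risk difference (RD).
# The OR alone does not fix the RD; the baseline risk p0 is also needed.
def risk_difference_from_or(odds_ratio: float, p0: float) -> float:
    """Risk difference implied by an odds ratio at baseline risk p0."""
    odds0 = p0 / (1 - p0)          # baseline odds
    odds1 = odds_ratio * odds0     # exposed/treated odds
    p1 = odds1 / (1 + odds1)       # exposed/treated risk
    return p1 - p0

# The same OR of 2 implies very different risk differences
# at different baseline risks:
print(risk_difference_from_or(2.0, 0.01))  # ~0.0098
print(risk_difference_from_or(2.0, 0.50))  # ~0.1667
```

The two calls illustrate why summarizing a trial by its OR alone leaves the clinically relevant RD undetermined.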
