Hauck et al. (1991) say in the JCE: “*The classical criterion is that a covariate is a confounder if it is associated with the exposure and, causally, with the outcome. (Strictly speaking, this is the wording for cohort studies)*.”

Miettinen and Cook have suggested that a covariate is a confounder if the estimate of the exposure effect changes when the covariate is included in the model, which Hauck et al. call the “change-in-estimate criterion” or operational criterion.

Hauck et al. call covariates that satisfy the operational but not the classical criterion “mavericks.” Given that a non-confounding third variable that is merely prognostic for the outcome can still satisfy the operational criterion whenever the effect measure is non-collapsible, is it not prudent to reconsider the operational definition of confounding?
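To make the maverick phenomenon concrete, here is a small numerical sketch (my own illustration, not from Hauck et al.; the coefficient values are arbitrary): a binary covariate Z that is independent of the exposure X, and therefore cannot be a classical confounder, still changes the odds ratio when adjusted for, because the odds ratio is non-collapsible.

```python
from math import exp

def expit(x):
    """Inverse logit."""
    return 1.0 / (1.0 + exp(-x))

# Hypothetical population: Z ~ Bernoulli(0.5), independent of exposure X.
# Assumed conditional model: logit P(Y=1 | X, Z) = -1 + 1*X + 2*Z
b0, bx, bz = -1.0, 1.0, 2.0

# The conditional (stratum-specific) OR is exp(bx) in BOTH strata of Z:
conditional_or = exp(bx)

# Marginal risks: average over Z, which is valid here precisely because
# Z is not a confounder (it is independent of X).
p_y_x0 = 0.5 * expit(b0) + 0.5 * expit(b0 + bz)
p_y_x1 = 0.5 * expit(b0 + bx) + 0.5 * expit(b0 + bx + bz)
marginal_or = (p_y_x1 / (1 - p_y_x1)) / (p_y_x0 / (1 - p_y_x0))

# "Change in estimate" from adjusting for Z, despite zero confounding:
pct_change = 100 * (conditional_or - marginal_or) / marginal_or
print(conditional_or, marginal_or, pct_change)  # ~2.72, ~2.23, ~22%
```

The adjusted OR (≈2.72) exceeds the unadjusted marginal OR (≈2.23) by about 22%, so a 15% change-in-estimate rule would flag Z as a “confounder” even though Z and X are independent: exactly the maverick case.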

To me this represents dichotomous thinking and bad statistical practice. I remember a talk at a national statistical meeting that attacked it on the grounds of arbitrariness, showing that applying a “15% change rule” results in different decisions on the OR and log(OR) scales.
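That arbitrariness is easy to verify with a hypothetical pair of estimates (the numbers below are mine, not from the talk): the same adjustment can fall under a 15% threshold on the OR scale yet far exceed it on the log(OR) scale.

```python
from math import log

# Hypothetical unadjusted and adjusted odds ratios
or_unadj, or_adj = 1.5, 1.7

# Relative change on the OR scale: 0.2/1.5 ~ 13.3%  -> below 15%
change_or = abs(or_adj - or_unadj) / or_unadj

# Relative change on the log(OR) scale: ~30.9%      -> above 15%
change_log_or = abs(log(or_adj) - log(or_unadj)) / log(or_unadj)

print(change_or < 0.15, change_log_or < 0.15)  # opposite decisions
```

The "15%" rule keeps the covariate out on one scale and forces it in on the other, for the very same pair of estimates.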

I agree; it is clearly bad statistical practice, but my point is that with odds ratios and logistic regression this would also be wrong. As Schuster et al. stated recently: "… many clinical researchers are not aware that the use of this change-in-estimate criterion may lead to wrong conclusions when applied to logistic regression coefficients." Perhaps this applies to all other non-collapsible measures, and it is not made clear in textbooks or the wider literature: we commonly see change-in-estimate or significance-testing methods (with thresholds set to 0.2 or more) used with logistic regression or odds ratios.

Schuster et al. suggest that one could instead compare the unadjusted exposure effect estimate with the estimate from an inverse probability weighted model, which makes sense: both are marginal estimates on the same scale, so any difference between them reflects confounding rather than noncollapsibility.
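A minimal simulation sketch of that comparison (the data-generating values and the stratum-frequency propensity estimate are my own assumptions, not Schuster et al.'s worked example): here Z is a genuine confounder, the crude OR is inflated, and the IPW-weighted OR recovers the true marginal causal OR.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Simulated population with genuine confounding: Z affects both X and Y.
z = rng.binomial(1, 0.5, n)
x = rng.binomial(1, 0.3 + 0.4 * z)                 # P(X=1|Z) = 0.3 or 0.7
p_y = 1 / (1 + np.exp(-(-1 + 1 * x + 2 * z)))      # logit P(Y=1|X,Z)
y = rng.binomial(1, p_y)

def table_or(x, y, w):
    """Odds ratio from a (weighted) 2x2 table."""
    a = w[(x == 1) & (y == 1)].sum(); b = w[(x == 1) & (y == 0)].sum()
    c = w[(x == 0) & (y == 1)].sum(); d = w[(x == 0) & (y == 0)].sum()
    return (a * d) / (b * c)

# Unadjusted (crude) marginal OR: biased by confounding through Z
crude_or = table_or(x, y, np.ones(n))

# IPW: estimate P(X = x_i | Z = z_i) empirically within strata of binary Z,
# then weight each observation by the inverse of that probability.
p_x1_given_z = np.array([x[z == 0].mean(), x[z == 1].mean()])[z]
p_xi = np.where(x == 1, p_x1_given_z, 1 - p_x1_given_z)
ipw_or = table_or(x, y, 1.0 / p_xi)

# Crude OR is inflated; IPW OR sits near the true marginal causal OR (~2.23)
print(crude_or, ipw_or)
```

Because both quantities are marginal odds ratios, the gap between them is attributable to confounding by Z, unlike the crude-versus-conditional comparison that the change-in-estimate criterion performs.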

Addendum: Schuster et al. made a mistake, though. They say: "*The noncollapsibility effect is caused by a difference in the scale on which β1 and β′1 are estimated. In linear regression, the total variance is the same for nested models: when the explained variance increases through adding a covariate to the model, the *un*explained variance decreases by the same amount. As a result, effect estimates from nested linear models are on the same scale and thus collapsible.*" This is also what Norton concluded, and it seems to be the common explanation for the changing coefficients. What is ignored here is that in medicine, logistic regression is almost never about a latent continuous variable, and this variance explanation is a myth, as has been explained clearly by Kuha & Mills.