Good reading on when "doubly robust" is worth the effort?

Hello everyone,

Every couple of years there seems to be a wave of doubly robust boosters in my field of interest (evidence synthesis/comparative effectiveness supporting HTA submissions, primarily). Does anyone have recommended reading on cases where a doubly robust estimator might actually save me, i.e. be worth the extra effort of inheriting all of the workflow difficulties of both treatment-assignment and outcome modelling?

My knee-jerk reaction is:

  1. It doesn’t protect me from variable selection issues, since the variables needed for adjustment are the same whether I work in a treatment-assignment or an outcome-modelling paradigm.
  2. The applications I’ve generally seen are something like a logistic-regression IPTW with a bunch of simple linear terms and then the same formula in the outcome model (see the sketch at the end of this post).
  3. I’m worried people will just do a bad version of both models (see number 2).

Is there something I’m missing, or is the argument just that it’s not really that much extra effort, might have a benefit, and probably won’t cause harm?
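
For concreteness, here is a minimal sketch of the “same formula twice” workflow I mean in point 2: an AIPW-style doubly robust estimate built from a logistic propensity model and a linear outcome model using the same simple terms. The simulated data, variable names, and models are purely illustrative assumptions, not a recommendation.

```python
# Minimal AIPW (doubly robust) sketch on simulated data; the data-generating
# process, variable names, and simple linear terms are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))                                   # baseline covariates
e_true = 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.25 * X[:, 1])))  # true propensity
a = rng.binomial(1, e_true)                                   # treatment indicator
y = 1.0 * a + X @ np.array([0.8, -0.5, 0.3]) + rng.normal(size=n)

# 1) Treatment-assignment model: logistic regression with simple linear terms
ps = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]

# 2) Outcome model with the same linear terms, fit within each arm
m1 = LinearRegression().fit(X[a == 1], y[a == 1]).predict(X)
m0 = LinearRegression().fit(X[a == 0], y[a == 0]).predict(X)

# 3) AIPW combination of the two models for the average treatment effect
aipw = np.mean(m1 - m0
               + a * (y - m1) / ps
               - (1 - a) * (y - m0) / (1 - ps))
print(f"AIPW ATE estimate: {aipw:.3f}")  # true effect is 1.0
```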


I have exactly the same reservations, and hope that others can provide relevant references. I know there is a paper studying targeted maximum likelihood estimation, a related method, which found that the standard errors provided by that method are badly biased (too low) and need correction by a prohibitively computationally expensive bootstrap process. There is another paper comparing one multi-step method with ordinary covariate adjustment and finding no advantage; I believe the multi-step method was a “doubly robust” method.

What has not been well understood by the doubly robust community is that ordinary covariate adjustment is pretty darn robust to model misspecification. And it tries to do the right thing in explicitly handling outcome heterogeneity within exposure groups.
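
To make that concrete, here is a minimal sketch of ordinary covariate adjustment on simulated data of the same flavour as the example above (again, the data, names, and model are purely illustrative assumptions): a single outcome model that includes treatment and the covariates, with the treatment coefficient as the adjusted effect estimate, and no propensity model at all.

```python
# Minimal covariate-adjustment sketch on simulated data; the data-generating
# process and variable names are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))                            # baseline covariates
a = rng.binomial(1, 1 / (1 + np.exp(-0.5 * X[:, 0])))  # treatment indicator
y = 1.0 * a + X @ np.array([0.8, -0.5, 0.3]) + rng.normal(size=n)

# Single outcome model: y ~ treatment + covariates
design = sm.add_constant(np.column_stack([a, X]))
fit = sm.OLS(y, design).fit()
print(f"Adjusted treatment effect: {fit.params[1]:.3f} (SE {fit.bse[1]:.3f})")
# true effect is 1.0
```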