A deep philosophical question came up in the OR vs RR thread.

@AndersHuitfeldt wrote:

> My personal view is that trading off bias for precision is “cheating” and gives you uninterpretable inferential statistics. In my opinion, the correct move is to accept that the standard errors will be large unless we can increase sample size. I would rather have an honest but imprecise prediction.

As far as I know, trading bias for variance is rational behavior from the perspective of decision theory. Gelman wrote a short post about the bias-variance tradeoff:

> My second problem is that, from a Bayesian perspective, given the model, there actually is a best estimate, or at least a best summary of posterior information about the parameter being estimated. It’s not a matter of personal choice or a taste for unbiasedness or whatever … For a Bayesian, the problem with the “bias” concept is that it is conditional on the true parameter value. But you don’t know the true parameter value. There’s no particular virtue in unbiasedness.

Later on he wrote:

> I know that it’s a common view that unbiased estimation is a good thing, and that there is a tradeoff in the sense that you can reduce variance by paying a price in bias. But I disagree with this attitude.
>
> To use classical terminology, I’m all in favor of unbiased prediction but not unbiased estimation. Prediction is conditional on the true theta, estimation is unconditional on theta and conditional on observed data.
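To make the tradeoff concrete, here is a minimal simulation sketch (the scenario values are my own hypothetical choices, not from Gelman's post): a deliberately biased shrinkage estimator of a normal mean can beat the unbiased sample mean on mean squared error when the true effect is small relative to the noise.

```python
import random

def mse_of_scaled_mean(c, theta, sigma, n, reps=20000, seed=0):
    """Monte Carlo MSE of the estimator c * sample_mean for the mean of N(theta, sigma^2)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xbar = sum(rng.gauss(theta, sigma) for _ in range(n)) / n
        total += (c * xbar - theta) ** 2
    return total / reps

# Hypothetical scenario: weak signal (theta = 0.5), noisy data (sigma = 2), small sample (n = 10).
theta, sigma, n = 0.5, 2.0, 10
mse_unbiased = mse_of_scaled_mean(1.0, theta, sigma, n)  # sample mean: zero bias, full variance
mse_shrunk = mse_of_scaled_mean(0.5, theta, sigma, n)    # shrunk toward 0: biased, lower variance

print(f"unbiased MSE: {mse_unbiased:.3f}")
print(f"shrunken MSE: {mse_shrunk:.3f}")
```

Analytically, the sample mean has MSE = sigma²/n = 0.4, while the shrunken estimator has MSE = c²·sigma²/n + (c − 1)²·theta² = 0.1 + 0.0625 ≈ 0.16, so the biased estimator wins here; the simulation should agree. Of course, whether this is "rational" or "cheating" for inference (as opposed to prediction) is exactly what the two quoted views disagree about.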

I don’t believe there is any contradiction between statistical theory and the causal calculus. Hopefully this thread can show the relationship between the two formal theories.

The Wikipedia link has a decent reference section. Not so sure about the body of the entry, though.