Bootstrap on regression models

> I am intrigued at how bootstrapping can cause this much of a "significance" difference.

If you transform the p-value into bits of information (an S-value) using s = -\log_2(p): p = 0.06 means your data provided about 4 bits of information against the tested hypothesis, while p = 0.03 provides about 5 bits against it. So the difference isn't as big as the raw p-values (and intuition) suggest.
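A quick sketch of the S-value calculation (the function name `s_value` is just for illustration):

```python
import math

def s_value(p):
    """Surprisal (S-value): bits of information against the test hypothesis."""
    return -math.log2(p)

print(round(s_value(0.06), 2))  # ≈ 4.06 bits
print(round(s_value(0.03), 2))  # ≈ 5.06 bits
```

The two p-values differ by almost exactly one bit of information, i.e. one extra coin flip's worth of surprise.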

P-values are noisy; there is nothing particularly special about the 0.05 "significance" level. Your alpha needs to be judged in the context of power and prior odds.

If you use the probit transformation z = \Phi^{-1}(1 - p), that gives approximately an N(0, 1) random variable with a standard error of 1 (jump to pages 4-6 for the justification).

When p = 0.06, z = \Phi^{-1}(1 - 0.06) \approx 1.56 \pm 1. Conditional on the default model holding, a future p could be as low as 0.005 (z \approx 2.56) or as high as 0.29 (z \approx 0.56).
