I have seen arguments for basing sample size estimation on precision as the goal pop up in a few different places. In BBR, @f2harrell provides the following summary of why this may be of interest:
> There are many reasons for preferring to run estimation studies instead of hypothesis testing studies. A null hypothesis may be irrelevant, and when there is adequate precision one can learn from a study regardless of the magnitude of a P-value.
Some other places I have seen this include:
- John Kruschke’s blog post on optional stopping, which argues that stopping based on precision targets results in unbiased point estimates (a quick simulation sketch of this kind of stopping rule follows the list below).
  - One issue I have with this post: my understanding is that the point estimate is only unbiased if the prior correctly matches the data-generating process.
- Planning future studies based on the precision of a meta-analysis
And most recently:
- This paper on planning epidemiological study size based on precision rather than power.
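
As an aside on the Kruschke point above, the claim is easy to probe with a quick simulation. Below is a minimal sketch (Python; all settings are made up for illustration) of a frequentist analogue of the precision-based stopping rule: keep sampling from a known normal data-generating process, stop once the 95% CI half-width drops below a target, and look at the average point estimate across many replications. It is only a rough check under those assumptions, not Kruschke's exact Bayesian HDI rule.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative settings only; none of these come from a real study
TRUE_MEAN = 0.3          # true effect in the data-generating process
SIGMA = 1.0              # true SD of the observations
TARGET_HALFWIDTH = 0.2   # stop once the 95% CI half-width falls below this
MIN_N, MAX_N = 10, 5000  # minimum n before checking, and a hard cap
N_SIMS = 2000            # number of simulated studies

estimates, stop_ns = [], []
for _ in range(N_SIMS):
    data = []
    for n in range(1, MAX_N + 1):
        data.append(rng.normal(TRUE_MEAN, SIGMA))
        if n < MIN_N:
            continue
        # Data-dependent stopping rule: CI half-width based on the sample SD
        halfwidth = 1.96 * np.std(data, ddof=1) / np.sqrt(n)
        if halfwidth <= TARGET_HALFWIDTH:
            break
    estimates.append(np.mean(data))
    stop_ns.append(len(data))

print(f"mean of point estimates: {np.mean(estimates):.4f} (true mean = {TRUE_MEAN})")
print(f"median stopping n: {int(np.median(stop_ns))}")
```

Swapping the sample-SD interval for a posterior credible interval under different priors would be the natural way to check how much the unbiasedness depends on the prior matching the data-generating process.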
I am attracted to this idea in my own work, where I am primarily interested in estimation (and am gradually trying to link this to loss functions or probabilities to move people away from p-values), but outside of methods papers I have literally never seen it done. It seems particularly powerful when combined with Bayesian analysis and sensible priors. Some of the bigger challenges that come to mind for me are:
- The shift required to think in terms of precision. With a loss function/economic model it is easier to explain in terms of value of information, but I am not sure how to settle on “How precise is precise enough?” Is a power-curve-style visualization the best bet here (see the sketch after this list)?
- Typically the examples given result in larger sample sizes (though maybe the curve comes into play here again). Without a loss function, how do you convince people the larger sample is worthwhile?
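
On the visualization question above, the analogue of a power curve that I have in mind is a “precision curve”: expected 95% CI half-width as a function of sample size, which can also be inverted to get the n needed for a target half-width. A minimal sketch (Python, with made-up planning values for a mean and a worst-case proportion) is below.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical planning values; swap in your own SD / proportion
SIGMA = 1.0               # assumed SD of a continuous outcome
ns = np.arange(10, 1001)  # candidate sample sizes

# Expected 95% CI half-width for a mean (normal approximation, known SD)
halfwidth_mean = 1.96 * SIGMA / np.sqrt(ns)

# Worst-case (p = 0.5) 95% CI half-width for a proportion (Wald approximation)
halfwidth_prop = 1.96 * np.sqrt(0.5 * 0.5 / ns)

# Inverting the relationship: n needed to hit a target half-width for the mean
target = 0.10
n_needed = int(np.ceil((1.96 * SIGMA / target) ** 2))
print(f"n needed for a half-width of {target}: {n_needed}")

fig, ax = plt.subplots()
ax.plot(ns, halfwidth_mean, label="mean (SD = 1)")
ax.plot(ns, halfwidth_prop, label="proportion (p = 0.5, Wald)")
ax.axhline(target, linestyle="--", color="grey", label=f"target = {target}")
ax.set_xlabel("sample size n")
ax.set_ylabel("expected 95% CI half-width")
ax.set_title("Precision curve: achievable precision vs. sample size")
ax.legend()
plt.show()
```

The flattening of the curve is also what I would lean on for the “is the larger sample worthwhile” conversation: past a certain n, the precision gained per additional subject becomes very small.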
Does anyone have any experience actually implementing this approach? Did you find it useful? Do people think it actually provides any benefit?