The Bayes factor bound is the most optimistic assessment of how much information the experiment provides for distinguishing between the two hypotheses.
But before calculating the posterior odds via that relationship to the p-value, we need to rank experiments by their error rates. Berger, Bayarri, and Sellke make the case that Bayes factors can be interpreted in a frequentist sense as a ratio of error rates, \frac{1 - \bar\beta}{\alpha}: the expected power divided by the type I error rate.
We can see that this is correct by considering the “experiment” of coin flips. If we were to answer an empirical question simply by flipping a coin, our expected power would be 0.5 and our \alpha would also be 0.5, making our expected Bayes factor 1. That experiment is clearly useless if you want to learn anything.
Experiments have value only when 1 - \bar\beta \gt \alpha.
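To make the ratio concrete, here is a minimal sketch (using only the Python standard library) that computes the rejection odds \frac{1 - \bar\beta}{\alpha} for the coin-flip “experiment” above and for a conventional design; the one-sided z-test power formula with known unit variance is my illustrative choice, not anything specified in the text:

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist()  # standard normal distribution

def rejection_ratio(power, alpha):
    """Pre-experimental rejection odds: expected power over type I error."""
    return power / alpha

def power_one_sided_z(delta, n, alpha):
    """Power of a one-sided z-test for a true mean shift delta, sd = 1, n samples."""
    z_crit = Z.inv_cdf(1 - alpha)          # critical value at level alpha
    return 1 - Z.cdf(z_crit - delta * sqrt(n))

# Coin-flip "experiment": power = alpha = 0.5, so the odds are 1 -- useless.
print(rejection_ratio(0.5, 0.5))    # 1.0

# A conventional design, alpha = 0.05 with 80% power, gives 16:1 odds.
print(rejection_ratio(0.80, 0.05))  # 16.0

# Hypothetical design: effect delta = 0.5, n = 25, alpha = 0.05
pw = power_one_sided_z(0.5, 25, 0.05)      # power comes out near 0.80
print(rejection_ratio(pw, 0.05))           # odds near 16:1
```

The coin-flip case lands exactly at odds of 1, the boundary where 1 - \bar\beta = \alpha; any design worth running should land well above it.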
I have no idea why this POV is not taught in intro stats.