It takes nothing more than high school algebra to rearrange the equations in this paper, which I have posted at least twice in this exhausting thread. Before making a blanket claim that I am wrong, you need to read it.
The authors are all top-notch statisticians. I'm confident they can handle high school algebra.
From the highlights:
Pre-experimentally, these odds are the power divided by the Type I error.
When the prior and posterior are expressed as odds, this is precisely how $\alpha$ was calculated in a GWAS study where James Berger was a consultant statistician.
You need to read the Rejection Ratio paper (cited above) in order to understand the argument, and why your claim about risk ratios vs ratios of odds ratios is wrong.
Berger describes his reasoning in this video (from about 5:00 to 30:00).
(The relation among posterior odds, prior odds, power, and $\alpha$ is described around the 25:00-28:00 mark.)
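For reference, the rearrangement in question is just the odds form of Bayes' theorem applied to a rejection (the symbols $O_{\text{pre}}$ and $O_{\text{post}}$ are my shorthand; the paper uses slightly different notation):

$$
O_{\text{post}} \;=\; O_{\text{pre}} \times \frac{1-\beta}{\alpha}
\qquad\Longrightarrow\qquad
\alpha \;=\; (1-\beta)\,\frac{O_{\text{pre}}}{O_{\text{post}}}
$$

With illustrative GWAS-style inputs (my numbers, chosen to show the mechanics rather than the study's exact values): prior odds of a true association $O_{\text{pre}} = 10^{-5}$, power $1-\beta = 0.5$, and a desired posterior odds of 10:1 yield $\alpha = 0.5 \times 10^{-5} / 10 = 5 \times 10^{-7}$, the kind of stringent genome-wide threshold Berger discusses.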
I used similar reasoning in a re-analysis of a "null" meta-analysis that claimed "no association" between excess knee range of motion (ROM) and the risk of ACL injury in amateur athletes.
There is nothing special about diagnostic testing that has not already been explored in the hypothesis testing literature, where likelihood ratios are derived from frequentist error probabilities.
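To make that parallel concrete, here is a minimal sketch (my own illustration, not taken from any of the papers above): the positive likelihood ratio of a diagnostic test, sensitivity / (1 - specificity), has exactly the same form as the pre-experimental rejection ratio, power / $\alpha$, and both update prior odds to posterior odds the same way.

```python
# Minimal sketch (my illustration): the diagnostic positive likelihood
# ratio and the pre-experimental rejection ratio are the same object,
# a ratio of the two frequentist error probabilities.

def posterior_odds(prior_odds: float, true_positive_rate: float, false_positive_rate: float) -> float:
    """Bayes' theorem in odds form: posterior = prior * likelihood ratio."""
    return prior_odds * true_positive_rate / false_positive_rate

# Diagnostic test: LR+ = sensitivity / (1 - specificity) = 0.90 / 0.05 = 18
print(posterior_odds(prior_odds=0.1, true_positive_rate=0.90, false_positive_rate=0.05))  # 1.8

# Hypothesis test, pre-experimentally: rejection ratio = power / alpha = 0.80 / 0.05 = 16
print(posterior_odds(prior_odds=0.1, true_positive_rate=0.80, false_positive_rate=0.05))  # 1.6
```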
Related References
You can derive adaptive p values (the significance threshold decreases as the sample size increases) by minimizing a linear combination of the error probabilities; a toy sketch follows the citation below.
Luis Pericchi and Carlos Pereira, "Adaptative significance levels using optimal decision rules: Balancing by weighting the error probabilities," Brazilian Journal of Probability and Statistics 30(1), 70-90 (February 2016).
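A toy version of that idea, under assumptions I am choosing purely for illustration (one-sided z-test with known $\sigma$, a fixed alternative, equal weights on the two errors; the paper's actual framework is more general):

```python
# Sketch (my illustration, not the paper's derivation): pick the critical
# value c that minimizes a weighted sum a*alpha(c) + b*beta(c) for a
# one-sided z-test of H0: mu = 0 vs H1: mu = mu1. The implied Type I
# error shrinks automatically as n grows.
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def adaptive_alpha(n: int, mu1: float = 0.5, sigma: float = 1.0,
                   a: float = 1.0, b: float = 1.0) -> float:
    delta = mu1 * n**0.5 / sigma                               # standardized effect under H1
    risk = lambda c: a * norm.sf(c) + b * norm.cdf(c - delta)  # a*alpha + b*beta
    c_opt = minimize_scalar(risk, bounds=(0.0, delta), method="bounded").x
    return norm.sf(c_opt)                                      # the optimal alpha(n)

for n in (10, 100, 1000, 10000):
    print(f"n = {n:>5}:  alpha* = {adaptive_alpha(n):.2e}")
```

With equal weights the optimal cutoff sits where the two error densities cross (here $c = \delta/2$), so the implied $\alpha$ falls rapidly with $n$ instead of staying pinned at 0.05.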