I should hope (perhaps overoptimistically) that my posts above show my views and answers clearly enough, but again: To some extent the MBx debate reflects the never-ending problems with stat testing, especially overinterpretation without regard to the full uncertainty in the situation or the costs and benefits of decisions under different scenarios.
To repeat what I said far above, I suspect the problem MBI attempted to address could have been solved more clearly and uncontroversially by graphing the upper and lower P-value functions with the interval boundaries marked, or by tabulating the upper and lower P-values at those boundaries (including the null), and then explaining the rationale for any decisions based on the resulting graph or table. In parallel one could graph or tabulate likelihood functions at the interval boundaries so one could obtain posterior odds based on prior odds, for use in decisions.
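As a rough illustration of the tabulation idea (all numbers hypothetical, and the usual normal approximation for the estimate assumed), the one-sided P-values at the interval boundaries and the likelihood-ratio-based posterior odds can be sketched with nothing beyond a standard library:

```python
import math

SQRT2 = math.sqrt(2.0)

def norm_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / SQRT2))

def norm_pdf(z):
    """Standard normal density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def p_values_at(estimate, se, theta):
    """One-sided (lower, upper) P-values for a hypothesized effect size
    theta, under the normal approximation for the estimate."""
    z = (estimate - theta) / se
    return norm_cdf(z), 1.0 - norm_cdf(z)

def posterior_odds(estimate, se, theta1, theta0, prior_odds=1.0):
    """Posterior odds of theta1 vs. theta0: prior odds times the
    likelihood ratio of the two hypothesized effect sizes."""
    lr = norm_pdf((estimate - theta1) / se) / norm_pdf((estimate - theta0) / se)
    return prior_odds * lr

# Hypothetical example: estimate 0.30, standard error 0.20, and an
# interval of "trivial" effects from -0.20 to 0.20 (numbers invented).
estimate, se = 0.30, 0.20
for theta in (-0.20, 0.0, 0.20):  # lower boundary, null, upper boundary
    p_lo, p_hi = p_values_at(estimate, se, theta)
    print(f"theta={theta:+.2f}: lower P={p_lo:.3f}, upper P={p_hi:.3f}")

# Posterior odds of the upper boundary vs. the null, from even prior odds:
print(f"posterior odds: {posterior_odds(estimate, se, 0.20, 0.0):.2f}")
```

Sweeping `theta` over a grid instead of just the three boundary values traces out the full P-value and likelihood functions for graphing; the point is that the table or graph, not a single dichotomized verdict, is what should inform any decision.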
I think the whole of research statistics on pre-specified treatments or exposures should be upgraded by moving to continuous descriptive presentations, emphasizing and illustrating uncertainties and dangers of each decision before making any decision. That is how I have seen competent expert panels operate. But that requires a fundamental change in how basic statistics is taught and used, a change that has been promoted for decades yet is still not adopted as the standard.
For more of my views about statistics reform, see my presentation at the NISS webinar,
https://www.youtube.com/watch?time_continue=1962&v=_p0MRqSlYec&feature=emb_logo
along with the other presentations at
https://www.niss.org/news/digging-deeper-radical-reasoned-p-value-alternatives-offered-experts-niss-webinar
Beyond those generalities, I am not a sports scientist, nor do I have any stake either way in MBx, past or future. So at this point I think we're overdue to hear from others on this matter, including Frank and the principals in the public MBI dispute (Sainani, Batterham, etc.), to whom I sent the present blog link.