I am adhering to Harry Crane’s distinction between “academic” probabilities (which carry no real-world consequences for the reporter when wrong) and real probabilities, which result in true economic gain or loss. This is independent of the idea of verifying a probabilistic claim.
Harry Crane (2018). The Fundamental Principle of Probability. Researchers.One. https://researchers.one/articles/18.08.00013v1
My complaint is that p-values themselves are not the appropriate scale for comparing the information contained in multiple studies when the point of reference (i.e., the “null hypothesis”) is not exactly true (which it never is). This has led others to suggest different scales that keep the model-based frequency properties of the experimental design (the sampling distribution) from being confused with the combined, scientific assessment of the collected information together with prior information (the Bayesian posterior).
As I noted above, Kulinskaya, Staudte, and Morgenthaler advise reporting variance-stabilized t-statistics, along with a standard error of ±1, to emphasize the random nature of evidence in this perspective. When the sample size is large enough, these statistics closely follow the standard normal distribution N(0,1) when the null is true; any deviation from the null reference model appears as a shift of that distribution, i.e., the non-centrality parameter.
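A minimal simulation sketch of that idea, using a plain known-variance z-statistic rather than the authors’ actual variance-stabilizing transform (the sample size, effect size, and names below are made up for illustration):

```python
# Sketch: evidence on a N(shift, 1) scale. The z-statistic is ~N(0, 1) under the
# reference model and shifts by the non-centrality sqrt(n) * delta under an alternative.
import numpy as np

rng = np.random.default_rng(1)
n, delta, reps = 50, 0.4, 20_000      # hypothetical sample size and true standardized effect

def z_stat(effect):
    x = rng.normal(effect, 1.0, size=(reps, n))
    return np.sqrt(n) * x.mean(axis=1)   # known-variance z-statistic per simulated study

z_null, z_alt = z_stat(0.0), z_stat(delta)
print(f"null: mean={z_null.mean():.2f}, sd={z_null.std():.2f}   # ~ N(0, 1)")
print(f"alt:  mean={z_alt.mean():.2f}, sd={z_alt.std():.2f}   # ~ N(sqrt(n)*delta, 1)")
print(f"non-centrality sqrt(n)*delta = {np.sqrt(n) * delta:.2f}")
```

Reporting the statistic as T ± 1 then conveys both the estimated shift and the unit sampling uncertainty on that scale.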
The merit of this proposal is that it connects Fisher’s information-theoretic perspective (post-data reporting of a particular study) with Neyman-Pearson design considerations (i.e., large-sample results), without the cognitive pathologies noted in the literature for close to 100 years now. Frequentist statistics then looks less like an ad hoc collection of algorithms.
The other proposal, by @Sander, is the base-2 log transform of the p-value, which provides an information measure in bits (the S-value). This is merely a different scale compared with Fisher’s natural-log transform of the p-value used for meta-analytic purposes.
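A quick sketch of the scale, with made-up p-values:

```python
# Surprisal scale: s = -log2(p), the bits of information against the reference model.
import numpy as np

for p in (0.5, 0.05, 0.005):   # hypothetical p-values
    s = -np.log2(p)
    print(f"p = {p:<6} ->  s = {s:4.1f} bits  (as surprising as {s:.1f} fair coin tosses all landing heads)")
```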
Note that both transformations permit the valid combination of information from multiple experiments.
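For example, Fisher’s method combines k independent p-values via −2 Σ ln pᵢ, which is chi-square with 2k degrees of freedom when the reference model holds in every study; on the base-2 scale the bits simply add. A sketch with hypothetical p-values (SciPy assumed available):

```python
# Fisher's combination of independent p-values: -2 * sum(ln p_i) ~ chi-square on 2k df
# under the reference model in each study. The p-values below are made up for illustration.
import numpy as np
from scipy import stats

p_values = [0.08, 0.12, 0.30]
k = len(p_values)

fisher_stat = -2 * np.sum(np.log(p_values))
combined_p = stats.chi2.sf(fisher_stat, df=2 * k)
total_bits = np.sum(-np.log2(p_values))      # surprisals add across studies

print(f"Fisher chi-square = {fisher_stat:.2f} on {2 * k} df, combined p = {combined_p:.3f}")
print(f"total information = {total_bits:.1f} bits")
```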
Regarding uniform priors:
Then, too, a lot of Bayesians (e.g., Gelman) object to the uniform prior because it assigns higher prior probability to β falling outside any finite interval (−b, b) than to falling inside, no matter how large b; e.g., it appears to say that we think it more probable that OR = exp(β) > 100 or OR < 0.01 than that 100 > OR > 0.01, which is absurd in almost every real application I’ve seen.
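To make that concrete: under a proper uniform prior of half-width B on β = ln(OR), the prior probability that 0.01 < OR < 100 is ln(100)/B once B exceeds ln(100), so it shrinks toward zero as the prior is made “flatter.” A small numeric sketch:

```python
# As the half-width B of a uniform prior on beta = ln(OR) grows, the prior probability
# that 0.01 < OR < 100 (i.e., |beta| < ln(100)) shrinks toward zero.
import numpy as np

b = np.log(100)                      # ~4.61 on the log-odds-ratio scale
for B in (10, 100, 1000, 10_000):    # half-widths of a proper uniform(-B, B) prior
    prob_inside = min(1.0, b / B)
    print(f"B = {B:>6}:  Pr(0.01 < OR < 100) = {prob_inside:.4f}")
```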
On a Bayesian interpretation of P values: