How is the “non-divergence interval” mathematically or semantically different from either a conventional confidence/compatibility interval or a confidence/compatibility distribution (the nested set of intervals obtained as $\alpha$ ranges from 0 to 1)?
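For concreteness, here is a minimal sketch of that latter object (assuming a normal approximation; the point estimate and standard error are made up purely for illustration):

```python
from statistics import NormalDist

estimate, se = 1.5, 0.6  # hypothetical point estimate and standard error

# The compatibility ("confidence") distribution is the nested family of
# intervals traced out as alpha sweeps over (0, 1); each interval collects
# the parameter values with p > alpha under the model.
z = NormalDist().inv_cdf
for alpha in (0.01, 0.05, 0.10, 0.25, 0.50):
    half_width = z(1 - alpha / 2) * se  # two-sided critical value times SE
    print(f"alpha={alpha:.2f}: "
          f"[{estimate - half_width:.2f}, {estimate + half_width:.2f}]")
```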
I can find no reason to argue with @Sander about the use of either the term “compatibility” or the $-\log_2$ transformation of the p-value; both follow from short derivations based on stats-101 definitions or information theory. In the old language thread I suggested “sufficiently surprising (conditional on model assumptions)” as an alternative to “significance,” since every statistician I have come across agrees that the negative log transform of a p-value is a measure of information known as the surprisal.
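As a quick illustration of that transform (a minimal sketch; the fair-coin reading of the result is the standard information-theoretic one):

```python
import math

def s_value(p: float) -> float:
    """Surprisal (S-value): bits of information against the test
    hypothesis, given the model; s = -log2(p)."""
    return -math.log2(p)

# A p-value of 0.05 carries about 4.3 bits of surprisal -- roughly as
# surprising as seeing four heads in a row from a fair coin.
for p in (0.5, 0.05, 0.005):
    print(f"p = {p}: {s_value(p):.2f} bits")
```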
I initially found the idea of evidence as a random quantity rather strange, but I have come to accept it as a valid perspective, especially in the design phase of an experiment.