I have a bit of discomfort regarding the above usage of surprise, one that parallels warnings against inverting data and parameter probabilities:
The surprise measured by -log(p) is surprise at the observed data if the model from which the P-value p is computed were correct. To say “the ends of the CI [that is, the confidence limits] are more surprising than the MLE” could be taken as inverting the -log(p) interpretation into a statement about parameter values being surprising given the data. That would be posterior surprise (the surprise you would have at finding out that the true parameter value was in the posterior tail(s) cut off by the given parameter value), which would be defensible if the interval were a posterior credible interval. But that description is incorrect if it is only a frequentist CI.
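To keep the direction of the conditioning concrete, here is a minimal Python sketch (the data, sample size, and null value are all hypothetical) that computes the surprisal -log2(p), sometimes called the S-value, for observed data under a test model:

```python
import numpy as np
from scipy import stats

# Hypothetical data: 25 observations treated as i.i.d. normal
# under the test model (that assumption is itself part of what is tested).
rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=25)

# Two-sided P-value for a hypothetical null value mu0 = 0.
t_stat, p = stats.ttest_1samp(x, popmean=0.0)

# Surprisal in bits: how surprising data this extreme would be
# *if* the test model (including mu = 0) were correct.
s_bits = -np.log2(p)
print(f"p = {p:.3f}, surprisal = {s_bits:.2f} bits")

# Note the conditioning: this is surprise at the data given the model,
# not surprise at a parameter value given the data (a posterior notion).
```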
Also, I think Poole’s three 1987 pieces on the P-value function are not dated in the least and are far better for applied interpretations than is Fraser’s 2019 article:
Poole C. Beyond the confidence interval. Am J Public Health 1987a;77:195–199.
Poole C. Confidence intervals exclude nothing. Am J Public Health 1987b;77:492–493.
Poole C. Response to Thompson (letter). Am J Public Health 1987c;77:880.
In fact, I think Fraser’s article is downright wrong when it says:
“…if the p-value function examined at some parameter value is high or very high as on the left side of the graph, then the indicated true value is large or much larger than that examined; and if the p-value function examined at some value is low or very low as on the right side of the graph, then the indicated true value is small or much smaller than that examined. The full p-value function arguably records the full measurement information that a user should be entitled to know!”
As AG might say: No, no, no! Any conclusion about the true value is a posterior inference, which is not at all formalized by a P-value except in instances where the P-value happens to coincide numerically with a posterior probability. And for forming inferences the user should be entitled to know far more than the P-value function, including every detail of the selection and measurement protocols of the study, and critical discussion of every assumption that went into computing the P-values (in my field, few assumptions if any will be exactly correct). In that regard, the sentence “The full p-value function arguably records the full measurement information that a user should be entitled to know!” is the epitome of academic nonsense, even if it is modified with the false conditional “given the model is correct”.
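As a concrete illustration of both the P-value function and the “numerical coincidence” caveat, here is a hedged Python sketch for the textbook normal-mean case (the estimate and standard error are invented for illustration). In this special setting, and only in settings like it, the lower one-sided P-value at each tested value equals the flat-prior posterior probability that the true value exceeds it:

```python
import numpy as np
from scipy import stats

# Hypothetical summary data: an estimate and standard error, e.g. for a
# log risk ratio; both numbers are invented for illustration.
est, se = 0.40, 0.15

# P-value function: for each candidate parameter value theta, the
# P-value from a Wald/normal test of H0: true value = theta.
theta = np.linspace(est - 3 * se, est + 3 * se, 7)
z = (est - theta) / se
p_two_sided = 2 * stats.norm.sf(np.abs(z))  # two-sided P at theta
p_lower = stats.norm.cdf(z)                 # Pr(Est <= est | theta)

for th, p2, p1 in zip(theta, p_two_sided, p_lower):
    print(f"theta = {th:+.3f}  two-sided p = {p2:.3f}  one-sided p = {p1:.3f}")

# Special case: under this normal model with known SE and a flat prior,
# the posterior for the true value is N(est, se^2), so the lower
# one-sided P-value coincides numerically with the posterior
# Pr(true value > theta). That coincidence, not the P-value itself,
# is what would justify a posterior reading here; in general no such
# coincidence holds.
posterior_upper_tail = stats.norm.sf((theta - est) / se)
assert np.allclose(p_lower, posterior_upper_tail)
```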