Answering that gets into a big topic area…
Various transforms of the observed P-value p, such as 1−p, 1/p, and log(1/p), have been used as measures of evidence against statistical models or hypotheses (which are parts of models). Part of the interpretation problem is that those statistical models get confused with the theories that predict them. Correct understanding, however, requires keeping them distinct, because the measures refer only to the statistical model used to derive the P-value; purely logically, the measures say nothing about any theory unless that model can be deduced from the theory. Thus (as Shannon pointed out) the information they measure is information only in the narrow syntactic sense of data deviation from statistical prediction; it connects to semantic information (contextual meaning) only to the extent that information gets encoded in the theory in a form that enters into the deduction of the statistical model from the theory. You may see my earlier comments as stemming from this distinction. [Parallel comments apply to analogous statistical measures in other systems, such as likelihood ratios and Bayes factors, which have huge literatures including mathematical connections to P-values; but again, those literatures often confuse the statistical model (which may now include an explicit prior distribution) with the causal network or theory under study.]
The idea of re-expressing P-values as surprisals via the S-value transform s = log(1/p) = −log(p) goes back (at least) to the 1950s, using various log bases (which change only the unit scale). The transform also arises in studies of test behavior under alternatives, with ln(1/p) as an example of a betting score and a “safe” test statistic.
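As a quick numerical illustration (my own sketch, not taken from the articles below; the function name `s_value` is just for exposition), the transform is a one-liner in Python, with base 2 giving the surprisal in bits:

```python
import math

def s_value(p, base=2):
    """Surprisal (S-value) of a P-value: s = log_base(1/p) = -log_base(p).
    base=2 gives bits; base=math.e gives nats."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return -math.log(p, base)

# p = 0.5 yields exactly 1 bit: no more surprising than one head in one coin toss.
print(s_value(0.5))
# p = 0.05 yields about 4.3 bits: roughly as surprising as 4 heads in 4 tosses.
print(round(s_value(0.05), 2))
```

The bit scale is what makes the coin-tossing interpretation work: an S-value of s bits corresponds to the surprise of seeing all heads in s fair tosses.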
I’ve authored or coauthored several articles trying to explain the S-value’s motivation and interpretation from a neo-Fisherian (refutational statistics) perspective. The following should be free downloads, and may most directly answer your question:
Greenland, S. (2019). Some misleading criticisms of P-values and their resolution with S-values. The American Statistician, 73(sup1), 106–114, open access at www.tandfonline.com/doi/pdf/10.1080/00031305.2018.1529625
Rafi, Z., and Greenland, S. Semantic and cognitive tools to aid statistical science: Replace confidence and significance by compatibility and surprise. BMC Medical Research Methodology, in press. http://arxiv.org/abs/1909.08579
Greenland, S., and Rafi, Z. To aid scientific inference, emphasize unconditional descriptions of statistics. http://arxiv.org/abs/1909.08583
See also this background essay against nullism, dichotomania, and model reification:
Greenland, S. (2017). The need for cognitive science in methodology. American Journal of Epidemiology, 186, 639–645, https://academic.oup.com/aje/article/186/6/639/3886035.
More quick, basic treatments of key topics in the above articles are in:
Amrhein, V., Trafimow, D., and Greenland, S. (2019). Inferential statistics as descriptive statistics: There is no replication crisis if we don’t expect replication. The American Statistician, 73(sup1), 262–270, open access at www.tandfonline.com/doi/pdf/10.1080/00031305.2018.1543137
Greenland, S. (2019). Are “confidence intervals” better termed “uncertainty intervals”? No: Call them compatibility intervals. British Medical Journal, 366:l5381, https://www.bmj.com/content/366/bmj.l5381.
Cole, S.R., Edwards, J., and Greenland, S. (2020). Surprise! American Journal of Epidemiology, in press. https://academic.oup.com/aje/advance-article-abstract/doi/10.1093/aje/kwaa136/5869593