Predicting survival of cancer patients using the horoscope: astrology, causal inference, seasonality, frequentist & Bayesian approach

@f2harrell: I think I’ve had enough coffee this morning to clean up my initial post.

To say a skeptical prior would not find an association seems analogous to a frequentist advising someone that his/her \alpha level is too high for a particular context.

Added on 8/17/2020: I should have simply cited an instructive paper by Bayarri, Benjamin, Berger, and Sellke on what they call the rejection ratio to demonstrate what I mean. This perspective is useful whether you approach a problem as a frequentist or as a Bayesian. I wish it had been taught to me in my intro stats class.

Expressing Bayes' theorem (pre-experiment) in odds form:
O_{pre} = \frac{\pi_1}{\pi_0} \times \frac{1 - \bar\beta}{\alpha}
where \pi_1 / \pi_0 is the prior odds of the alternative to the null, 1 - \bar\beta is the (average) power, and \alpha is the type I error rate.

The value of an experiment (conditional on rejection of the null) is the ratio of power to \alpha, what the authors call the rejection ratio. We can see that the more skeptical the prior, the smaller \alpha needs to be for an experiment to shift the prior odds. They use the average (expected) power, since this quantity is computed before any data are seen.
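The arithmetic above can be sketched in a few lines. The numbers below (a 1-to-9 skeptical prior, 80% power) are illustrative choices of mine, not values from the thread:

```python
# Sketch of the pre-experimental "rejection odds" idea from Bayarri,
# Benjamin, Berger & Sellke.  Example numbers are illustrative only.

def rejection_ratio(power: float, alpha: float) -> float:
    """Ratio of the probability of rejection under H1 (power) to the
    probability of rejection under H0 (alpha)."""
    return power / alpha

def pre_experimental_odds(prior_odds: float, power: float, alpha: float) -> float:
    """Odds in favor of H1, conditional on rejection of the null."""
    return prior_odds * rejection_ratio(power, alpha)

# A skeptical prior (1-to-9 in favor of H1) with conventional power/alpha:
print(pre_experimental_odds(prior_odds=1/9, power=0.80, alpha=0.05))   # 16 * 1/9 ≈ 1.78
# Shrinking alpha to 0.005 makes the same rejection far more convincing:
print(pre_experimental_odds(prior_odds=1/9, power=0.80, alpha=0.005))  # 160 * 1/9 ≈ 17.8
```

This makes the point in the paragraph concrete: with a skeptical prior, a rejection at \alpha = 0.05 barely moves the odds past even, while \alpha = 0.005 does real work.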

In the post-data POV, they use the \frac{1}{-e \times p \times \ln(p)} bound (valid for p < 1/e) to relate the p value to a Bayes factor bound, which is the largest amount of evidence against the null that the data can provide under any prior. This is the most an honest advocate can assert as evidence in favor of an effect for a particular study.

For any retrospective look at a data set, we can calculate a Bayes factor, a Bayes factor bound, or a p value. There exists a function that outputs a Bayes factor bound when given a p value; its inverse recovers the p value. And because posterior odds = prior odds × Bayes factor, we can always solve for the prior implied by any asserted posterior, given the Bayes factor of the data.
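Solving for the implied prior is just odds arithmetic. A hedged sketch (the 0.95 posterior and 6.36 Bayes factor are example inputs of mine):

```python
# Posterior odds = prior odds * Bayes factor, so the prior implied by an
# asserted posterior is posterior_odds / bf, converted back to a probability.

def implied_prior_prob(posterior_prob: float, bf: float) -> float:
    """Prior probability of H1 required to reach `posterior_prob`
    given a Bayes factor `bf` in favor of H1."""
    post_odds = posterior_prob / (1 - posterior_prob)
    prior_odds = post_odds / bf
    return prior_odds / (1 + prior_odds)

# To claim a 95% posterior for H1 from a Bayes factor of 6.36,
# you would need to have started with a prior of about 0.75:
print(round(implied_prior_prob(0.95, 6.36), 2))  # ≈ 0.75
```

This is a useful sanity check on anyone's claimed posterior: it makes the prior they must be assuming explicit.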

After much study, I try to place frequentist reports in a Bayesian context. Robert Matthews (Aston University) has written several instructive papers on how to derive the implied prior when presented with “confidence” (a.k.a. compatibility) intervals. This is his earliest one.

Matthews, R. (2001). Methods for assessing the credibility of clinical trial outcomes. Drug Information Journal, 35(4), 1469–1478. (link)
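A sketch of the core calculation in Matthews' Analysis of Credibility, under the usual normal-approximation assumptions (a 95% CI (lo, hi) excluding the null, analyzed on a scale where the null is 0; the hazard-ratio CI of 1.1–3.0 is my own example, not from his paper):

```python
import math

# Matthews' "skepticism limit": a zero-centered normal prior with 95%
# interval (-S, S) is just skeptical enough to pull the posterior 95%
# interval back to the null.  Derived from conjugate normal updating.

def skepticism_limit(lo: float, hi: float) -> float:
    """S = (hi - lo)^2 / (4 * sqrt(lo * hi)), for a 95% CI with 0 < lo < hi."""
    if not 0 < lo < hi:
        raise ValueError("requires 0 < lo < hi (CI excluding the null)")
    return (hi - lo) ** 2 / (4 * math.sqrt(lo * hi))

# Example: a hazard-ratio CI of (1.1, 3.0), analyzed on the log scale:
S = skepticism_limit(math.log(1.1), math.log(3.0))
print(round(math.exp(S), 2))  # ≈ 2.18: critical prior interval (1/2.18, 2.18)
```

Reading: unless your prior already allowed hazard ratios beyond about 2.2 (in either direction), this trial result should not convince you the effect is real.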

Addendum: An educational paper on placing p values in a Bayesian context:

@albertoca: Sander Greenland likes to convert the p value into a proper binary unit of refutation, the S-value s = -\log_2(p). In this case, 6 bits of information against a model is akin to flipping a fair coin 6 times and having them all come up heads: 0.5^6 = 0.0156.

Using the \frac{1}{-e \times p \times \ln(p)} bound, your p value of 0.01343 converts to a best-case Bayes factor bound of 6.36 to 1 in favor of the alternative.
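Both conversions for this thread's p value can be checked in a couple of lines:

```python
import math

# The two conversions discussed above, for p = 0.01343:
p = 0.01343
s_value = -math.log2(p)                  # Greenland's bits of refutation
bfb = 1.0 / (-math.e * p * math.log(p))  # Bayes factor bound

print(round(s_value, 1))  # ≈ 6.2 bits, i.e. roughly six heads in a row
print(round(bfb, 2))      # ≈ 6.36 to 1 in favor of the alternative
```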