I think the challenge with a game metaphor is that, for the Bayesian football fan, the chance of a “win” at the conclusion of a game is still a random quantity. The end of the game and the declaration of a “winner” are somewhat arbitrary: if the game were extended indefinitely, you could keep accumulating evidence, the tide could turn, and so on.

This limitation really underscores philosophical differences in the approach to clinical trials. And issues beyond sequential design! If I power my frequentist study with 1,000 patients and get an HR of 0.85 (p > 0.05), re-estimating the sample size makes the p-value very difficult to calculate honestly, because the decision to enroll more patients was itself driven by the result of my significance test. Without a correction for that data-dependent look, the nominal p-value no longer controls the type I error.
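A quick simulation makes the point. This is just a sketch, not anyone's actual trial design: I assume a one-sample z-test on standardized outcomes under the null, and a deliberately extreme extension rule ("if not significant, always double the sample and re-test at the same nominal alpha"). The function name and all parameters are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def naive_extension_type1(n1=100, n2=100, alpha=0.05, n_sims=20_000):
    # Under the null (true effect = 0): test after n1 patients; if not
    # significant, enroll n2 more and re-test the pooled data at the
    # same nominal alpha, with no correction for the interim look.
    crit = stats.norm.ppf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sims):
        x1 = rng.standard_normal(n1)
        if abs(x1.mean()) * np.sqrt(n1) > crit:
            rejections += 1
            continue
        x = np.concatenate([x1, rng.standard_normal(n2)])
        if abs(x.mean()) * np.sqrt(n1 + n2) > crit:
            rejections += 1
    return rejections / n_sims

rate = naive_extension_type1()
```

The simulated rejection rate under the null comes out well above the nominal 0.05 (around 0.08 for two uncorrected looks), which is exactly why the re-estimated design needs an adjusted analysis.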

The Bayesian has no problem saying that the posterior only gets more precise as more and more patients roll in (provided the IID assumptions hold, which is rarely the case in big, long trials). To a Bayesian, the “end” of a trial is just a snapshot of belief, and need not generalize to other studies. Within a Bayesian decision-theoretic framework, it suffices to say, “The Pats being very likely to win is a good enough victory for me.” Which, if the study/game were performed/played again, would not necessarily replicate.
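The "posterior just keeps tightening" picture can be sketched with a toy conjugate model. Everything here is hypothetical: a made-up true response rate, a flat Beta(1, 1) prior, and patients arriving in waves of 200, assumed IID exactly as caveated above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_p = 0.55     # hypothetical true response rate, made up for illustration
a, b = 1.0, 1.0   # flat Beta(1, 1) prior on the response rate

widths = []
for _ in range(5):                        # five successive enrollment waves
    outcomes = rng.binomial(1, true_p, size=200)
    a += outcomes.sum()                   # conjugate update: add successes...
    b += 200 - outcomes.sum()             # ...and failures to the Beta prior
    lo, hi = stats.beta.ppf([0.025, 0.975], a, b)
    widths.append(hi - lo)                # width of the 95% credible interval
```

Each snapshot of `widths` is a perfectly coherent summary of belief at that moment; nothing in the Bayesian machinery forces a particular wave to be declared "the end."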

This is an ethical issue too: companies seeking statistical significance by running bigger and bigger and bigger trials. Sure, if a drug is harmful you hopefully find out sooner rather than later. But continuing to randomize patients while waffling in pursuit of statistical significance (whether you are Bayesian or frequentist) means you are probabilistically denying patients access to tried-and-true treatments with known risk profiles. For instance, in a study of bananas as analgesics, if there weren’t equipoise, I’d be mad and say give me the damn aspirin!

For those reasons, I prefer the rigidity of formal stopping rules and alpha spending.
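For concreteness, the kind of rigidity I have in mind can be sketched with the Lan-DeMets O'Brien-Fleming-type alpha-spending function, which fixes in advance how much cumulative type I error may be "spent" by each information fraction t. The function name here is my own; the formula, 2(1 - Phi(z_{alpha/2} / sqrt(t))), is the standard one.

```python
import math
from scipy import stats

def of_alpha_spent(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type spending function: cumulative
    type I error allowed to be spent by information fraction t (0 < t <= 1)."""
    z = stats.norm.ppf(1 - alpha / 2)
    return 2.0 * (1.0 - stats.norm.cdf(z / math.sqrt(t)))

# Very little alpha is spent at early looks, so early stopping demands
# overwhelming evidence, and nearly the full 0.05 is preserved for the
# final analysis.
schedule = {t: of_alpha_spent(t) for t in (0.25, 0.5, 0.75, 1.0)}
```

Because the spending schedule is fixed before any data are seen, there is no room to waffle: the interim looks are paid for up front.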