As explained so nicely in this statistical analysis plan for hydroxychloroquine (HCQ) in COVID-19,
The reason that Bayesian inference is more efficient for continuous learning is that it computes probabilities looking ahead—the type of forward-in-time probabilities that are needed for decision making. Such probabilities are probabilities about unknowns based on conditioning on all the current data, without conditioning on unknowns. Importantly, there are no multiplicities to control. This is one of the least well understood aspects of Bayesian vs. frequentist analysis, and it is due to current probabilities superseding probabilities that were computed earlier.
Traditional statistics has multiplicity issues arising from giving more than one chance for data to be extreme (by taking more than one look at the data). It is the need for sampling multiplicity adjustments that makes traditional methods conservative from the standpoint of the decision maker, thus making continuous learning difficult and requiring larger sample sizes. The traditional approach limits the number of data looks and has a higher chance of waiting longer than needed to declare evidence sufficient. It is also likely to declare futility too late.
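To make the "current probabilities supersede earlier ones" point concrete, here is a minimal sketch of sequential Bayesian monitoring with an assumed Beta-Binomial model and a flat prior (a hypothetical example, not from the source plan). At each interim look, the posterior probability that the success rate theta exceeds 0.5 is recomputed from all data accumulated so far; the latest value simply replaces earlier ones, with no adjustment for how many looks have been taken.

```python
# Hypothetical sketch: sequential Bayesian monitoring of a binary outcome.
# Flat Beta(1,1) prior; posterior after s successes and f failures is
# Beta(s+1, f+1), approximated here on a grid using only the stdlib.

def posterior_prob_above(successes, failures, cutoff=0.5, grid_size=10_000):
    """Grid approximation to P(theta > cutoff | data) under a flat prior."""
    weights = []
    for i in range(1, grid_size):
        theta = i / grid_size
        # Unnormalized posterior density at this grid point
        weights.append(theta ** successes * (1 - theta) ** failures)
    total = sum(weights)
    mass_above = sum(w for i, w in enumerate(weights, start=1)
                     if i / grid_size > cutoff)
    return mass_above / total

# Illustrative accumulating data, examined at three interim looks.
# Each look conditions on ALL data so far; no multiplicity penalty applies.
looks = [(8, 4), (15, 9), (30, 15)]   # (successes, failures) to date
for s, f in looks:
    p = posterior_prob_above(s, f)
    print(f"after {s + f} patients: P(theta > 0.5 | data) = {p:.3f}")
```

The frequentist analogue would have to widen its stopping boundaries for each of the three looks (e.g., via a group-sequential alpha-spending rule); here the posterior probability at the final look is the complete, current statement of evidence regardless of how often the data were examined.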
It seems to me that one motivation for the longstanding emphasis on one (or a very few) ‘primary outcome(s)’ is to avoid problems of p-hacking, garden-of-forking-paths, etc. These worries seem quite similar to concerns about multiplicity. Does Bayesian analysis allow for infinitely many outcomes, just as it allows infinitely many looks at the data?