I have seen at least two main ways to view priors: as convenient tools for applying regularization, or as a true representation of your prior beliefs.
Personally, I think that if you really want to commit to a Bayesian approach, then priors need to be selected to match your expectations for the given problem. A Normal(0, 1) prior might not make sense for a model of the effect of beta blockers on blood pressure.
The trick with the Bayesian multiple-testing simulations I’ve often seen is that treatment effects are simulated in a way consistent with the Bayesian idea of parameters being random, and the model’s prior is then matched to this generative process exactly. The idea is that this shows the posterior will always be the “correct” combination of prior and likelihood. This is why Bayesians don’t really need to prove things via simulation: as long as the prior and likelihood are right, the posterior is an accurate representation of current belief. This will (probably) not be very comforting to frequentists, who are typically interested in error control, so you’ll find you are talking at cross purposes a lot of the time.
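Here’s a minimal sketch of the kind of simulation I mean, assuming a conjugate normal-normal setup with made-up values of tau and sigma (both hypothetical, chosen purely for illustration). Because the prior fed to the model is the same distribution the effects were drawn from, the 95% credible intervals cover the true effects about 95% of the time, by construction:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generative assumptions (hypothetical, for illustration only):
# true effects drawn from N(0, tau^2); observed data y ~ N(theta, sigma^2).
# The model's prior is set to the SAME N(0, tau^2) distribution.
tau, sigma = 1.0, 2.0
n_sims = 100_000

theta = rng.normal(0.0, tau, n_sims)   # "random" true treatment effects
y = rng.normal(theta, sigma)           # one noisy observation per effect

# Conjugate normal-normal posterior with the matched prior:
# theta | y ~ N(w * y, w * sigma^2), where w = tau^2 / (tau^2 + sigma^2)
w = tau**2 / (tau**2 + sigma**2)
post_mean = w * y
post_sd = np.sqrt(w * sigma**2)

# Coverage of 95% credible intervals is ~0.95 because the prior and
# the generative process agree exactly.
lo = post_mean - 1.96 * post_sd
hi = post_mean + 1.96 * post_sd
coverage = np.mean((theta >= lo) & (theta <= hi))
print(f"credible-interval coverage: {coverage:.3f}")
```

The point isn’t that this proves anything new; it’s that the demonstration is circular in exactly the way described above, which is why it won’t reassure anyone whose concern is frequentist error control.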
Also note that both frequentists and Bayesians can incorporate prior information via model structure. It might make sense to treat treatment effects as exchangeable via a random effect if, for example, everything is measured on the same scale and you think the treatment works similarly for each outcome. Sander Greenland has a nice paper on this from a frequentist perspective, I believe (not specifically about multiple outcomes, but on the general use of multilevel models for regularization).
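As a rough illustration of what that exchangeability buys you, here is a small numpy sketch of partial pooling across outcomes. The estimates and standard errors are made up, and a DerSimonian-Laird-style moment estimator of the between-outcome variance stands in for a full multilevel fit; it’s one of several options, not the method Greenland’s paper uses:

```python
import numpy as np

# Hypothetical per-outcome effect estimates and standard errors,
# all on the same scale (the exchangeability assumption above).
est = np.array([0.80, 0.30, 0.55, 0.10, 0.65])
se = np.array([0.25, 0.20, 0.30, 0.25, 0.20])

# Method-of-moments estimate of the between-outcome variance tau^2
# (DerSimonian-Laird-style; truncated at zero).
w = 1.0 / se**2
grand_mean = np.sum(w * est) / np.sum(w)
q = np.sum(w * (est - grand_mean) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(est) - 1)) / c)

# Partial pooling: each estimate shrinks toward the grand mean,
# more strongly when its standard error is large relative to tau.
shrink = tau2 / (tau2 + se**2)
pooled = grand_mean + shrink * (est - grand_mean)
print(np.round(pooled, 3))
```

Noisier outcomes get pulled harder toward the overall mean, which is the regularization both camps can agree on even if they disagree about what the resulting intervals mean.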