There have been a number of threads on the Bayesian perspective on multiple comparison adjustments. This particular one stands out for its dialogue between Frank and Sander Greenland. The entire thread (starting with the first post) is worth study.
An interesting observation: properly adjusted p-values share a key property with likelihood ratios, in that the probability of a misleading assertion decreases as the information increases. Royall gives a universal bound: the probability of observing a likelihood ratio favoring hypothesis B over hypothesis A by a factor of at least k, when A is in fact true, is at most \frac{1}{k}.
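As a quick illustration of Royall's bound, here is a small Monte Carlo sketch under an assumed setup (one Gaussian observation per experiment, hypotheses A = N(0,1) and B = N(1,1); the distributions, k, and simulation size are all choices made for this example, not anything from the thread):

```python
import random, math

random.seed(1)
k = 8            # factor by which the LR must favor B
n_sims = 100_000 # number of simulated experiments under A
exceed = 0

for _ in range(n_sims):
    x = random.gauss(0.0, 1.0)          # one observation drawn under A = N(0,1)
    log_lr = x - 0.5                    # log[f_B(x)/f_A(x)] for B = N(1,1) vs A = N(0,1)
    if log_lr >= math.log(k):           # LR favors B by a factor of at least k
        exceed += 1

# The empirical frequency should sit well below the universal bound 1/k.
print(exceed / n_sims, "<=", 1 / k)
```

In this particular Gaussian case the true probability is far below 1/k; the bound is universal precisely because it holds regardless of the pair of hypotheses compared.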
One of the first meta-analytic procedures reported was Tippett’s minimum p-value method, expressed as \alpha^* = 1-(1- \alpha)^\frac{1}{k}, where \alpha is the pre-specified overall Type I error of the combination procedure, k is the number of independent studies, and \alpha^* is the resulting threshold applied to the p-value of an individual study.
If the smallest p-value in the set of studies (assuming the tests are independent) is less than the \alpha^* derived from Tippett’s equation, the global null hypothesis that there was no signal in any study is rejected.
Unfortunately, the researcher can only conclude that at least one study detected an effect, without identifying which one. Other nonparametric combination methods have been developed that are more useful, but that is for another thread.
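The decision rule above can be sketched in a few lines; the overall \alpha, number of studies, and p-values below are hypothetical numbers chosen just to show the arithmetic:

```python
# Tippett's minimum p-value combination test (illustrative values).
alpha = 0.05   # pre-specified overall Type I error of the combination procedure
k = 10         # number of independent studies

# Per-study threshold: alpha* = 1 - (1 - alpha)^(1/k)
alpha_star = 1 - (1 - alpha) ** (1 / k)   # about 0.0051 for these values

# Hypothetical p-values from the k studies
p_values = [0.004, 0.21, 0.48, 0.09, 0.33, 0.76, 0.12, 0.55, 0.28, 0.61]

# Reject the global null (no signal in any study) if the smallest p-value
# falls below the Tippett threshold.
reject_global_null = min(p_values) < alpha_star
print(alpha_star, reject_global_null)
```

Note how quickly \alpha^* shrinks: with 10 studies the per-study threshold is already near 0.005, so only a quite small p-value anywhere in the set triggers rejection.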