I would see confidence intervals on odds ratios as the optimal endpoint. With a polytomous logistic regression, we have one odds ratio per predictor for each outcome group except the reference (i.e., the number of groups minus 1). Family-wise adjustment of CIs is, I believe, less typical than family-wise adjustment of p-values, but to me it is just the other side of the same coin.
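To make that "other side of the coin" concrete, here is a minimal sketch, assuming Wald intervals and a Bonferroni-style family-wise correction (only one of several possible adjustments, and the numbers are hypothetical), showing how the adjusted CI for an odds ratio widens relative to the unadjusted one:

```python
import math
from statistics import NormalDist

def or_ci(beta, se, m=1, alpha=0.05):
    """Wald confidence interval for an odds ratio exp(beta).

    beta, se: log-odds-ratio estimate and its standard error.
    m: number of comparisons in the family; m > 1 applies a
       Bonferroni adjustment (alpha / m), which widens the interval.
    """
    z = NormalDist().inv_cdf(1 - (alpha / m) / 2)
    return math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical numbers: one coefficient, adjusted across the 3
# non-reference outcome groups of a polytomous model.
unadjusted = or_ci(0.5, 0.2)
adjusted = or_ci(0.5, 0.2, m=3)
```

The adjusted interval strictly contains the unadjusted one, which is exactly the CI analogue of a family-wise p-value correction.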
I would say that the choice concerning family-wise adjustment is meaningless prior to decision-making, although this could be argued depending on what you consider to constitute decision-making. During decision-making, hypothesis testing at either a family-wise or a separate error rate is sub-optimal because neither optimizes relative to cost/benefit. In a Bayesian framework, the optimal method would be to integrate cost/benefit over the posterior.
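A toy Monte Carlo sketch of that last point, assuming we already have posterior draws of an effect and a made-up utility function (both hypothetical): the optimal action is whichever one maximizes expected utility averaged over the posterior, with no error-rate adjustment appearing anywhere.

```python
import random

def optimal_action(posterior_samples, utility):
    """Pick the action maximizing expected utility under the posterior.

    utility(action, theta) -> payoff; the expectation is approximated
    by averaging over posterior draws (Monte Carlo integration).
    """
    actions = ("treat", "do_nothing")
    def expected(a):
        return sum(utility(a, th) for th in posterior_samples) / len(posterior_samples)
    return max(actions, key=expected)

# Hypothetical example: theta is a log odds ratio; treating costs 1 unit
# and pays off 10 * theta. The "posterior" is a stand-in normal sample.
random.seed(0)
samples = [random.gauss(0.3, 0.1) for _ in range(5000)]

def utility(action, theta):
    return 10 * theta - 1 if action == "treat" else 0.0
```

The decision falls out of the cost/benefit structure directly; multiplicity only matters insofar as it is already reflected in the joint posterior.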
Unfortunately, the lack of consensus around family-wise adjustment consumes a lot of time and thought. You might recommend that everyone always specify a cost function and integrate over the posterior, but in many cases the potential decisions are unclear to those conducting the research.
So if the goal is simply to spread information, is the best idea to flip a coin to decide on family-wise adjustment? At least then people wouldn't have to waste time thinking about whether or not to do it.
I think the problem with this is that people actually use hypothesis tests to make decisions. As a result, if you want to reduce the chance that people will make really bad decisions based on p-values (or posterior probabilities) or confidence intervals (or credible intervals), you would want those quantities to be calibrated relative to at least a plausible cost/benefit scenario for your audience.
This is roughly how I sometimes think about whether to adjust for multiple comparisons: do we expect the error rates to be so unbalanced (e.g., Type I errors having a much higher probability) that we need to do something to bring the two roughly into the same ballpark? If so, then adjust.
I think a good systematic way to suggest the analysis be done is:

- Stage 1: Do not adjust for multiplicity as a general rule, as long as there is no selective reporting.
- Stage 2: Report expected error rates given stated assumptions, and/or include a toy decision-making example based on a simple cost/benefit function that sits right in the middle of the typical use case for your audience.

The decision-making example could even be constructed so that the decision (yes/no on the y-axis) is plotted as a function of some fixed cost (on the x-axis), assuming such a simple cost function could be even mildly representative.
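The Stage 2 toy example could be sketched like this, assuming a hypothetical posterior over the benefit of acting and a simple rule (act when posterior expected benefit exceeds the fixed cost); the `curve` list is the yes/no decision traced over the cost grid that would form the x-axis of the plot:

```python
import random

# Hypothetical posterior over the benefit of acting (stand-in normal draws).
random.seed(1)
posterior = [random.gauss(2.0, 1.0) for _ in range(4000)]

def decide(cost, samples):
    """Yes/no decision: act when the posterior expected benefit exceeds the fixed cost."""
    return sum(samples) / len(samples) > cost

# Decision as a function of fixed cost on a grid (the toy plot's x-axis).
grid = [c / 2 for c in range(11)]            # costs 0.0, 0.5, ..., 5.0
curve = [(c, decide(c, posterior)) for c in grid]
```

Under this simple rule the curve is a step function: "yes" for low costs, flipping to "no" once the cost passes the posterior mean benefit, which is exactly the kind of display a reader could map onto their own cost.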