It seems there is general agreement that no modelling strategy dominates the others in all cases. So while there may be reasonably strong prior (or expert) information suggesting that the logistic model is a good default (i.e., the opinion of @f2harrell), it might not reflect all the uncertainty that a skeptical audience might have (@AndersHuitfeldt).
Wouldn’t a principled way to settle this issue be to use model averaging or model selection techniques, whether Bayesian or frequentist? And how would someone specify the data analysis plan for such a methodology?
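To make the question concrete, here is a minimal sketch of one frequentist flavour of model averaging: Akaike weights over candidate link functions for a binary outcome. Everything in it is illustrative, not a recommendation: the simulated data, the covariates `x1`/`x2`, and the particular candidate links are placeholders, and a Bayesian version would replace the AIC weights with posterior model probabilities or stacking weights.

```python
# Minimal sketch: frequentist model averaging over candidate binary-outcome
# models using Akaike weights. The data frame, outcome, and covariates are
# hypothetical placeholders for the real analysis data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "y":  np.random.binomial(1, 0.3, 500),
    "x1": np.random.normal(size=500),
    "x2": np.random.normal(size=500),
})
X = sm.add_constant(df[["x1", "x2"]])
y = df["y"]

# Candidate models: same linear predictor, different link functions.
candidates = {
    "logit":   sm.families.links.Logit(),
    "probit":  sm.families.links.Probit(),
    "cloglog": sm.families.links.CLogLog(),
}
fits = {name: sm.GLM(y, X, family=sm.families.Binomial(link=link)).fit()
        for name, link in candidates.items()}

# Akaike weights: exp(-0.5 * delta_AIC), normalized to sum to one.
aic = np.array([fit.aic for fit in fits.values()])
w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()

# Model-averaged predicted probabilities (a frequentist analogue of BMA).
p_avg = sum(wi * fit.predict(X) for wi, fit in zip(w, fits.values()))
print(dict(zip(fits, np.round(w, 3))))
```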
The survey below discusses the issue from a frequentist perspective and also covers other techniques (e.g., penalization). It cites some of the older Bayesian model averaging papers mentioned in the Data Methods thread linked above.
Sylvain Arlot, Alain Celisse (2010). “A survey of cross-validation procedures for model selection,” Statistics Surveys 4, 40–79.
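For the model-selection side, a pre-specified analysis plan could fix the candidate set, the resampling scheme, and the scoring rule in advance and then select by cross-validated log-loss, roughly in the spirit of the procedures surveyed by Arlot and Celisse. The sketch below is only illustrative: the simulated data and the particular candidates (approximately unpenalized, ridge, and lasso logistic regression) are assumptions, not a recommendation.

```python
# Minimal sketch: pre-specifiable model selection by cross-validated log-loss.
# Candidate set, number of folds, and scoring rule would all be fixed in the
# analysis plan before looking at outcomes; the simulated data stand in for
# the real design matrix and binary outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))

candidates = {
    "logistic_unpenalized": LogisticRegression(C=1e6, max_iter=1000),
    "logistic_ridge":       LogisticRegression(C=1.0, max_iter=1000),
    "logistic_lasso":       LogisticRegression(penalty="l1", solver="saga",
                                               C=1.0, max_iter=5000),
}

# Mean cross-validated log-loss (sklearn reports it negated); the
# pre-specified rule is to pick the candidate with the lowest log-loss.
scores = {name: -cross_val_score(est, X, y, cv=10,
                                 scoring="neg_log_loss").mean()
          for name, est in candidates.items()}
print(min(scores, key=scores.get), scores)
```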