Should one derive risk difference from the odds ratio?

There seems to be general agreement that no modelling strategy dominates the others in all cases. So while there may be reasonably strong prior (or expert) information suggesting that the logistic model is a good default (i.e. the opinion of @f2harrell), it might not reflect all the uncertainty that a skeptical audience might have (@AndersHuitfeldt).

Wouldn’t a principled way to decide this issue be to use model averaging or model selection techniques, whether Bayesian or frequentist? And how would one pre-specify a data analysis plan for such a methodology?
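
As a rough sketch of what a frequentist version of this could look like (not a proposal for an actual analysis plan), one could fit several binomial GLMs that differ only in the link function, weight them by AIC in the spirit of the Buckland et al. paper listed under Related Papers, and report a model-averaged risk difference. The simulated data, variable names (y, trt, age), and the particular set of links are purely illustrative:

```r
set.seed(1)
n    <- 500
age  <- rnorm(n, 50, 10)
trt  <- rbinom(n, 1, 0.5)
p    <- plogis(-1 + 0.8 * trt + 0.02 * (age - 50))
y    <- rbinom(n, 1, p)
dat  <- data.frame(y, trt, age)

## Candidate models: same predictors, different link functions
links <- c("logit", "probit", "cloglog")
fits  <- lapply(links, function(lk)
  glm(y ~ trt + age, family = binomial(link = lk), data = dat))

## AIC weights: w_m proportional to exp(-0.5 * delta_AIC_m)
aics <- sapply(fits, AIC)
w    <- exp(-0.5 * (aics - min(aics)))
w    <- w / sum(w)

## Marginal risk difference per model (standardised over the observed
## covariate distribution), then averaged across models
rd_m <- sapply(fits, function(f) {
  p1 <- predict(f, transform(dat, trt = 1), type = "response")
  p0 <- predict(f, transform(dat, trt = 0), type = "response")
  mean(p1 - p0)
})
data.frame(link = links, weight = round(w, 3), RD = round(rd_m, 4))
sum(w * rd_m)  # model-averaged risk difference
```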

I ask this after looking at the application of Bayesian Model Averaging to the issue of fixed- vs random-effects meta-analysis, where I came across this R package: metaBMA: Bayesian (or Frequentist) Model Averaging for Fixed and Random Effects Meta-Analysis.
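
For the meta-analysis setting, a minimal sketch of how metaBMA might be called is below. The study-level log odds ratios and standard errors are simulated, and I am relying on the `meta_bma()` interface (effect sizes, standard errors, study labels, data) with its default priors as described in the package documentation; the prior settings and data are not a recommendation.

```r
# install.packages("metaBMA")
library(metaBMA)

## Invented study-level summaries for illustration only
dat <- data.frame(
  study = paste0("study", 1:8),
  logOR = c(0.42, 0.10, 0.55, 0.31, -0.05, 0.48, 0.22, 0.37),
  SE    = c(0.21, 0.18, 0.30, 0.15, 0.25, 0.28, 0.17, 0.20)
)

## Averages over fixed-effect and random-effects models, returning
## posterior model probabilities and a model-averaged pooled effect
fit <- meta_bma(y = logOR, SE = SE, labels = study, data = dat)
fit
```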

Related Threads:

Related Papers:

The following paper discusses the issue from a frequentist perspective and covers other techniques as well (e.g. penalization). It cites some of the older Bayesian Model Averaging papers mentioned in the Data Methods thread linked above.

Arlot, S., & Celisse, A. (2010). A survey of cross-validation procedures for model selection. Statistics Surveys, 4, 40–79.

https://projecteuclid.org/journals/statistics-surveys/volume-4/issue-none/A-survey-of-cross-validation-procedures-for-model-selection/10.1214/09-SS054.full

Here is another informative paper from a frequentist perspective (a rough sketch of its leave-one-out weighting idea follows the references below).

Hansen, B. E., & Racine, J. S. (2012). Jackknife model averaging. Journal of Econometrics, 167(1), 38-46.

https://www.sciencedirect.com/science/article/abs/pii/S0304407611002405

Buckland, S. T., Burnham, K. P., & Augustin, N. H. (1997). Model selection: An integral part of inference. Biometrics, 53(2), 603–618.
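
To make the Hansen & Racine jackknife model averaging idea a bit more concrete, here is a rough two-model sketch: choose the model weights to minimise leave-one-out prediction error. The data and candidate models are invented for illustration, and the paper's full method handles many candidate models via a constrained least-squares problem over the weight vector rather than the one-dimensional search used here.

```r
set.seed(2)
n  <- 200
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 0.5 * x1 + 0.3 * x2 + rnorm(n)
dat <- data.frame(y, x1, x2)

## Leave-one-out fitted values for a candidate model formula
loo_fit <- function(form) {
  sapply(seq_len(n), function(i) {
    f <- lm(form, data = dat[-i, ])
    predict(f, newdata = dat[i, , drop = FALSE])
  })
}

yhat1 <- loo_fit(y ~ x1)        # smaller candidate model
yhat2 <- loo_fit(y ~ x1 + x2)   # larger candidate model

## With two models the weight on model 1 is a single number in [0, 1];
## pick it by minimising the leave-one-out squared prediction error
cv <- function(w) mean((dat$y - (w * yhat1 + (1 - w) * yhat2))^2)
w1 <- optimize(cv, c(0, 1))$minimum
c(weight_model1 = w1, weight_model2 = 1 - w1)
```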