I guess what I’m saying is that I don’t think this is really possible without some sort of loss function. How expensive is the test? Who bears those costs? What care is foregone as a result of the loss of that money/clinician time? From whose perspective is the decision being made? How long will the test remain relevant? What benefit does it provide over the current best practice (genetic risk + history and environment, I assume)? I guess you could try to assume that the OR is directly proportional to the utility of the decision, but I have a hard time believing that will lead to good decision making. Even then, the uncertainty around the decision is more relevant to whether you should delay implementation and fund further research to increase precision, or implement now and fund that research in parallel.
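For concreteness, here’s a minimal sketch of what folding a loss function into the decision might look like. Every number below (the posterior for the OR, the baseline risk, the dollar values) is made up for illustration; the point is just that the same posterior supports opposite decisions under different cost assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical posterior draws for the screening tool's OR:
# median ~3.01 with wide uncertainty (all numbers are made up).
or_draws = np.exp(rng.normal(np.log(3.01), 0.55, size=100_000))

def expected_net_benefit(baseline_risk, value_per_case, cost_per_test):
    """Posterior-mean net benefit per person screened (stylized utility)."""
    odds = baseline_risk / (1 - baseline_risk) * or_draws
    risk = odds / (1 + odds)
    # value of the extra cases flagged, minus the cost of testing
    return np.mean((risk - baseline_risk) * value_per_case - cost_per_test)

# Same posterior, two different cost assumptions, opposite decisions:
print(expected_net_benefit(0.05, 5_000, 100))    # positive -> implement
print(expected_net_benefit(0.05, 5_000, 1_000))  # negative -> don't
```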
Say that you use some rule that says you shouldn’t recommend treatment if your 95% credible interval includes 2 (because of some sense that the risks of surgery, or risks plus costs, then outweigh the benefits). In this case you would not recommend use of the screening tool even though it’s more probable than not to be effective. The link above is a seminal paper on exactly this problem (i.e., how to incorporate uncertainty into decision making).
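A quick sketch of that tension, again with a hypothetical posterior (median OR around 3.01, wide uncertainty):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for the OR (numbers made up).
or_draws = np.exp(rng.normal(np.log(3.01), 0.55, size=100_000))

lo, hi = np.percentile(or_draws, [2.5, 97.5])
print(f"95% credible interval: ({lo:.2f}, {hi:.2f})")  # roughly (1.02, 8.84)
print(f"P(OR > 2) = {np.mean(or_draws > 2):.2f}")      # roughly 0.77

# The interval rule says "don't recommend" because the CI includes 2,
# even though OR > 2 is about three times as likely as OR <= 2.
print("CI rule recommends:", lo > 2)  # False
```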
Decision analysis for a given population would be done by a research team and then used by clinicians and patients as part of the decision-making process, if the analysis applies to the patient in front of them.
Some other thoughts that come to mind:
- Your OR is really only a tool to turn an assumed baseline risk into an absolute probability for a given population. Making a cut point based on an OR assumes you have a well-enough-defined patient population that everyone in it faces a similar balance of benefits and harms at that OR (see the sketch after this list).
- All of this also assumes that your predictions are well calibrated, that your OR of 3.01 can be interpreted causally, etc…
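To make the first bullet concrete, here’s a small illustration (baseline risks chosen arbitrarily) of how the same OR of 3.01 implies very different absolute risk changes at different baselines:

```python
def absolute_risk(baseline_risk: float, odds_ratio: float) -> float:
    """Turn a baseline risk plus an OR into an absolute probability."""
    odds = baseline_risk / (1 - baseline_risk) * odds_ratio
    return odds / (1 + odds)

for p0 in (0.01, 0.10, 0.40):
    p1 = absolute_risk(p0, 3.01)
    print(f"baseline {p0:.0%} -> {p1:.1%} (absolute change {p1 - p0:+.1%})")
# baseline 1%  -> 3.0%  (absolute change +2.0%)
# baseline 10% -> 25.1% (absolute change +15.1%)
# baseline 40% -> 66.7% (absolute change +26.7%)
```

So a single OR cut point lumps together patients whose absolute benefit differs by an order of magnitude, unless the population’s baseline risk is reasonably homogeneous.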