A question has been bouncing around my head for a while concerning the vaunted “clinical cutoff” vs. risk prediction via forward probabilities.
For this question I’m assuming that, for the hypothetical risk prediction, the statistical model development has been completed and a well-validated model exists. It has been published in Nature and approved by Dr. Harrell, Dr. Collins, Dr. Steyerberg, etc.
I’m more concerned with the mechanics of performing the arithmetic to apply predictions in real practice settings: a nomogram, a phone app, integration into the EMR, etc.
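To make that arithmetic concrete, here is a minimal sketch of the computation every one of those delivery methods ultimately performs, assuming a logistic regression model. The coefficients and predictors below are invented for illustration and are not from any published model:

```python
import math

# Hypothetical coefficients for illustration only -- NOT from any real model.
COEFS = {"intercept": -7.2, "age": 0.05, "sbp": 0.02, "diabetes": 0.6}

def predicted_risk(age: float, sbp: float, diabetes: int) -> float:
    """Predicted probability from a logistic model:
    risk = 1 / (1 + exp(-(b0 + b_age*age + b_sbp*sbp + b_dm*diabetes)))."""
    lp = (COEFS["intercept"]
          + COEFS["age"] * age
          + COEFS["sbp"] * sbp
          + COEFS["diabetes"] * diabetes)
    return 1.0 / (1.0 + math.exp(-lp))

# Example: a 65-year-old with SBP 150 mmHg and diabetes.
print(f"Predicted risk: {predicted_risk(age=65, sbp=150, diabetes=1):.1%}")
```

A nomogram encodes exactly this weighted sum graphically; a phone app or EMR module simply computes it directly.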
I wonder if the search for cutoffs and “hard and fast rules” for clinical decisions is driven partly by the technology involved in executing predictions at the decision-making level. Was there a golden age of clinical prediction when practitioners used nomograms regularly, or have they always just updated their knowledge through conferences, scanning the top journals in their field, and accumulating clinical intuition?
I wonder if we are in a weird in-between place: nomograms are considered old-fashioned, clinical prediction phone apps are too clunky and too much work when you have 15 other patients to see in the ER, and the interface between the EMR and well-validated prediction models just isn’t there at a technical/programming level. Stated differently, should we allocate some of our energy to programming smoother interfaces between the validated models we already have and clinical EMR systems? Or is this a need I’ve just imagined? Are practitioners already using these tools regularly, from the top research hospitals to small-town community hospitals?
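For what it’s worth, the programming gap I have in mind is not exotic. As a purely hypothetical sketch (reusing the made-up coefficients from above; a real integration would more likely go through a standard like CDS Hooks or SMART on FHIR), the validated model could be wrapped as a tiny service the EMR calls with covariates it already stores:

```python
import json
import math
from http.server import BaseHTTPRequestHandler, HTTPServer

# Same hypothetical, made-up coefficients as in the earlier sketch.
COEFS = {"intercept": -7.2, "age": 0.05, "sbp": 0.02, "diabetes": 0.6}

class RiskHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The EMR POSTs the covariates it already stores as JSON.
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        lp = (COEFS["intercept"] + COEFS["age"] * body["age"]
              + COEFS["sbp"] * body["sbp"] + COEFS["diabetes"] * body["diabetes"])
        risk = 1.0 / (1.0 + math.exp(-lp))
        payload = json.dumps({"predicted_risk": risk}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), RiskHandler).serve_forever()
```

The hard part, of course, is not these twenty lines but authentication, mapping the EMR’s data fields onto the model’s predictors, and keeping the coefficients current.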
And, a completely ignorant and not smart-ass (I promise) question: in a context where EMR integration hasn’t been achieved, is a nomogram derived from a well-validated model really so much harder to use in real practice than a “hard and fast rule” that it justifies giving up statistical precision and the ability to weigh patient- and clinician-specific risk tolerances?
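To make the “loss” half of that trade-off concrete, here is a toy example with invented numbers:

```python
# Toy illustration (invented numbers) of what a hard cutoff throws away.
CUTOFF = 0.20  # an arbitrary "hard and fast" decision threshold

# Predicted risks for two hypothetical patients from the validated model.
patients = {"A": 0.21, "B": 0.79}

for name, risk in patients.items():
    label = "high risk" if risk >= CUTOFF else "low risk"
    print(f"Patient {name}: predicted risk {risk:.0%} -> '{label}'")

# Both patients get the identical "high risk" label, yet a patient whose
# personal treatment threshold is, say, 0.40 would plausibly decline for A
# and accept for B -- a distinction the dichotomized rule cannot express.
```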