The question concerns work we have recently conducted on risk prediction using a serum biomarker in patients presenting with chest pain to emergency departments. In clinical medicine, biomarkers such as cardiac troponin are used to classify patients into three categories:
- MI ruled-out
- MI possible
- MI ruled-in
Now let’s say, for the sake of argument, you have a study population of 1,000 patients with an MI prevalence of 17%. Across the entire cohort, MI is safely ruled out in 36% at first blood draw, ruled in in 12%, and possible in the rest. The criterion for a ‘safe’ rule-out is a negative predictive value (NPV) of ≥99.5% or a sensitivity of ≥99%. The cut-offs used for classification achieve these performance targets and misclassify no patient with MI to ‘rule-out’.
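For concreteness, here is a minimal sketch of the counts these percentages imply; this is pure arithmetic from the figures above, nothing more:

```python
n = 1000
mi = round(0.17 * n)                 # 170 patients with MI
ruled_out = round(0.36 * n)          # 360 ruled out at first draw
ruled_in = round(0.12 * n)           # 120 ruled in
possible = n - ruled_out - ruled_in  # 520 remain 'possible'

fn = 0                               # no MI patient misclassified to rule-out
npv = (ruled_out - fn) / ruled_out   # observed NPV = 1.0
sens = (mi - fn) / mi                # observed sensitivity = 1.0

# How many false negatives would each 'safe' criterion tolerate?
max_fn_npv = max(k for k in range(ruled_out + 1) if (ruled_out - k) / ruled_out >= 0.995)  # 1
max_fn_sens = max(k for k in range(mi + 1) if (mi - k) / mi >= 0.99)                       # 1
print(npv, sens, max_fn_npv, max_fn_sens)
```

Note that either criterion would still be met with a single false negative, so the observed zero sits within, not at the edge of, the targets.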
We know from prior evidence that patients presenting very early to the ED might yield a ‘false negative’ result on the first blood draw; this has to do with the release kinetics of cardiac biomarkers: if we test too early, the biomarker may not yet have risen above detection/decision limits. 40% of patients in our cohort are such ‘early presenters’, i.e. they had their first blood draw within 3 hours of chest pain onset. None of these early presenters was falsely ruled out in our retrospective cohort study (which follows, since no patient in the entire cohort was misclassified).
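Purely to illustrate the kind of zero-event calculation I have in mind, here is a minimal sketch of an exact (Clopper-Pearson) upper bound on the false-negative rate in this subgroup. The number of early-presenter rule-outs is a hypothetical figure: the text above gives only the marginal percentages (40% early presenters, 360 rule-outs), so I assume proportional representation, 0.40 × 360 ≈ 144:

```python
# Minimal sketch: one-sided 95% upper bound for a proportion when
# 0 events are observed in n trials (exact Clopper-Pearson).
# ASSUMPTION: n_early_ruleout = 144 is hypothetical; the cohort gives
# 40% early presenters and 360 rule-outs, but not their overlap.
n_early_ruleout = 144
alpha = 0.05

# With x = 0, the Clopper-Pearson upper limit reduces to 1 - alpha**(1/n).
upper = 1 - alpha ** (1 / n_early_ruleout)
print(f"95% upper bound on FN rate: {upper:.4f}")                    # ~0.0206
print(f"'rule of three' approximation: {3 / n_early_ruleout:.4f}")   # ~0.0208
# i.e. the data are compatible with an NPV as low as ~97.9% here.
```

The point is only that a zero-numerator result still carries quantifiable uncertainty.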
How would you go about describing the uncertainty surrounding this finding? Would you do a post-hoc power analysis to assess how likely it is that you would have detected a drop in NPV from 100% to, e.g., 98%?
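As a sketch under the same hypothetical n = 144 as above, the post-hoc power calculation I am describing would amount to the probability of observing at least one false negative if the true NPV among early presenters were 98%:

```python
# Sketch of the post-hoc power idea: if the true false-negative rate
# among early-presenter rule-outs were 2% (NPV = 98%), how likely is
# it that a sample of n rule-outs would show at least one false negative?
# ASSUMPTION: n = 144 is the same hypothetical count as above.
n = 144
true_fn_rate = 0.02
p_at_least_one = 1 - (1 - true_fn_rate) ** n
print(f"P(observe >= 1 FN | NPV = 98%): {p_at_least_one:.3f}")  # ~0.946
```

Under these hypothetical numbers the probability comes out to roughly 95%.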