Patients presenting to the emergency department (ED) with symptoms the attending physician thinks could indicate a myocardial infarction (MI) will have blood drawn for measurement of troponins, often repeated after some time. An elevated troponin (above the 99th percentile of a healthy population), together with a change between measurements and some other criteria, is used to diagnose an MI. The clinical issue is that the prevalence of MI in such patients is low (2 to 25%, depending on the health system model) and traditionally many hours have been needed to “rule out” a heart attack. With newer (“high-sensitivity”) troponin assays enabling measurement of low concentrations, and the development of rapid rule-out strategies based on low troponin and/or risk-score thresholds, the field has advanced so that an MI can now apparently be safely ruled out after only a few hours in a clinically meaningful proportion of patients (disclaimer - I’ve been involved in developing, implementing, or testing many of these algorithms). But there are issues in moving away from the “threshold” approach towards using troponin concentration as a continuous variable and thereby, hopefully, better assisting physician & patient decision making. These include:
Typically 30 to 50% of patients have troponin concentrations below the limit of detection of the assay. This introduces a “natural” threshold for that assay. It also means that modelling troponin as a continuous variable is difficult. I’ve tried some approaches but can’t say I’m comfortable with them. What would be the best approach to including troponin in logistic regression or other models?
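To make the data structure concrete, here is a minimal sketch of one commonly suggested coding for a left-censored biomarker: a binary “below the limit of detection” indicator plus a log-concentration term fixed at log(LoD) for the censored values. Everything here is simulated and hypothetical (the LoD of 3 ng/L, the prevalence, the effect size are all made up for illustration) - this is one of the approaches one might try, not a recommendation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# hypothetical high-sensitivity troponin (ng/L), roughly log-normal;
# assumed limit of detection of 3 ng/L (illustrative value only)
true_tn = rng.lognormal(mean=1.2, sigma=1.0, size=n)
lod = 3.0

# simulated MI outcome: risk rises with log troponin, low prevalence
lin = -5.0 + 1.5 * np.log(true_tn)
mi = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# observed values are left-censored at the LoD
below = true_tn < lod            # ~30-50% of patients, as in practice
obs = np.where(below, lod, true_tn)

# indicator-plus-log-concentration coding:
#   column 1: 1 if below the LoD, else 0
#   column 2: log(observed concentration), = log(LoD) when censored
X = np.column_stack([below.astype(float), np.log(obs)])
model = LogisticRegression(C=1e6).fit(X, mi)
```

Predictions then use the same coding, e.g. `model.predict_proba([[1.0, np.log(3.0)]])` for a below-LoD patient. The indicator absorbs whatever the censored group’s average risk is, while the log term describes the dose-response among detectable values; whether that separation is adequate (versus, say, treating the LoD as an interval-censored covariate) is exactly the question I’m asking.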
It would be disastrous for a patient to be sent home from the ED on the basis of some algorithm when they are actually having an MI. For this reason algorithms typically target very high sensitivities (>99%) or NPVs (>99.5%). This, though, means thresholds are often determined on the basis of only one or two patients (given the cohort sizes). Intuitively, I think this means we don’t end up with optimal thresholds (i.e. those with the best specificity). While “thresholds” are an issue well discussed on this forum and many would suggest not to use them, I want to run the gauntlet and ask a question on the basis that they will be used. How can we best determine and validate optimal thresholds when there are so few events?
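To illustrate how little information one or two missed events carry, here is a sketch using the exact (Clopper-Pearson) binomial interval. The cohort size of 200 MIs is hypothetical but in the range of the derivation cohorts involved; the point is that a threshold tuned to miss 1 of 200 events has a point sensitivity of 99.5% but an exact confidence interval whose lower limit is far below the 99% target.

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) 95% CI for k successes in n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# a derivation cohort with 200 MIs: the threshold misses 1 event
lo, hi = clopper_pearson(199, 200)
print(f"sensitivity 199/200 = 99.5%, exact 95% CI [{lo:.3f}, {hi:.4f}]")

# even 0 missed events of 200 cannot exclude sensitivity below 99%
lo0, hi0 = clopper_pearson(200, 200)
print(f"sensitivity 200/200 = 100%,  exact 95% CI [{lo0:.3f}, {hi0:.1f}]")
```

The lower limit with zero misses is still only about 98%, so cohorts of this size simply cannot demonstrate >99% sensitivity, let alone distinguish between candidate thresholds on that criterion - which is why the threshold that happens to satisfy the target in a given sample is so unstable.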
The outcome event (MI) for these troponin-based algorithms is itself troponin-based. This suggests to me that they will be biased, and it makes evaluation of novel (non-troponin) assays more difficult. How can we deal with this?