Troponin, its use and misuse to rule-out and rule-in heart attacks



Does it carry more weight according to clinical intuition or according to data? I’ve seen no data justifying that.


Clinical intuition. I honestly never considered whether studies showed this to be a superior strategy. It makes pathophysiologic sense and is how I and all my cardiology attendings have approached acute chest pain. Patients with hypertension and renal disease often have a chronic, static, low-grade troponin elevation, and while that is a marker for poorer prognosis and certainly shouldn’t be ignored, it doesn’t, in my clinical experience, predict coronary stenosis requiring PCI. I would be very interested to study troponin dynamics if this has not been thoroughly explored. I would never have suspected that something so intuitive to my practice didn’t have data to back it up.


As Elias says - the change makes pathophysiological sense so is in the Universal Definition (although poorly defined in terms of magnitude or rate of change). However, I agree with you Frank that what is needed is an assessment of the measures of peak or latest value and change and how they relate to meaningful outcomes like mortality or stenosis requiring PCI. I’m part of a group who have shared data in the past, so may put this proposal to them (I expect it may need a lot of data).


Prepare to be surprised if other cardiovascular areas are any indication. A big lesson for me back during my days in Duke Cardiology was when cardiac surgeon Bob Jones hypothesized that the change from rest to exercise radionuclide ejection fraction was a predictor of amount of CAD and a predictor of CV death/MI. We found it didn’t predict either one, but that LVEF at peak exercise was the best single predictor we had ever seen - better than the entire coronary tree at predicting time to event.


To complicate matters even further, the 4th Universal Definition of MI is doing little to help the (clinical) discrimination between Type 1 and Type 2 MI. I cannot see that a spontaneous coronary artery dissection behaves any differently from an intermittent complete coronary artery occlusion caused by a thrombus at the site of a plaque rupture event. Of course, the pathogenesis is entirely different, but cTn concentrations and dynamic change values will behave very similarly. Good luck in distinguishing between the different types of MI without invasive investigations :wink:


Just catching up here Tom! I agree - the guideline points strongly towards more invasive testing. Indeed, there is even a paragraph on the potential utility of invasive testing to identify atherosclerotic plaque rupture. But if you don’t see it, are you going to do OCT? You always need to draw the line somewhere, and given we haven’t shown benefit from investigation and subsequent treatment, I think it’s right to be cautious!


I previously developed a model evaluating differences in rate of change of serial troponin and patient level characteristics for those adjudicated with type 1 and type 2 myocardial infarction, which I hoped might be useful to predict the diagnosis in practice. However, as you know, there is no independent reference standard to diagnose myocardial infarction, or indeed to distinguish type 1 and type 2 myocardial infarction. Ultimately I felt any regression model would simply reflect the variables which influenced diagnostic adjudication, rather than demonstrating a true association with one diagnosis or another. Others have undertaken and published similar analyses but I am unsure they are valid. Any suggestions or comments? Clinical utility is a separate issue.


You started off with the assumption that change is what’s important rather than the most recent value. I have seen no data supporting that assumption. Ignoring for the moment the difficult problem of needing an independent gold-standard diagnosis to analyze against, I suggest always starting with a regression model that relates f(baseline) + f(current value) to the outcome, where f is a flexible transformation such as from a regression spline. You can then see whether the slope of the current value is minus 1.0 times the slope of the baseline, in which case change is an optimal way to capture the two. But I doubt you’ll see that. For many parameters, the current value carries about 4/5 of the diagnostic/prognostic information, so any change measure will likely overweight the baseline value.
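A minimal sketch of this suggested strategy, on simulated data (all variable names, distributions, and effect sizes here are illustrative assumptions, not from any real troponin dataset). It fits the flexible additive model f(baseline) + f(current) with B-splines via `statsmodels`/`patsy`, then compares it with a pure change-score model:

```python
# Sketch of the suggested approach: relate f(baseline) + f(current) to the
# outcome with flexible spline transforms, then compare with a pure change
# score. All data are simulated; names and distributions are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
baseline = rng.lognormal(1.5, 1.0, n)            # simulated baseline troponin, ng/L
current = baseline * rng.lognormal(0.3, 0.8, n)  # simulated follow-up value
# Simulate an outcome driven mostly by the current value
logit = -4 + 0.8 * np.log(current) + 0.2 * np.log(baseline)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"y": y, "baseline": baseline, "current": current})

# Flexible additive model: cubic B-splines via patsy's bs()
full = smf.logit("y ~ bs(np.log(baseline), df=4) + bs(np.log(current), df=4)",
                 df).fit(disp=0)
# The change-score model is a heavily constrained special case
change = smf.logit("y ~ I(np.log(current) - np.log(baseline))", df).fit(disp=0)
print(full.aic < change.aic)
```

If the flexible additive model clearly outperforms the change-score model (e.g. by AIC), that is evidence the change score is discarding information rather than summarizing it well.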


Thanks for taking the time to reply.

One of the diagnostic criteria for myocardial infarction is a rise and/or fall in cardiac troponin on serial testing - any regression model I create will have as its outcome the adjudicated diagnosis of myocardial infarction, and this will undoubtedly include this change criterion. The bias associated with the inclusion of troponin within the reference standard, and a change on serial testing, seems inescapable. I think we have all discussed this issue at one stage or another…

To give some clinical context: the inclusion of change is on the basis of pathophysiological studies demonstrating that troponin release and excretion are time-dependent phenomena, rather than demonstration of additive statistical value in its inclusion for diagnosis. As a clinician, a single troponin value may guide prognosis but will not help suggest a diagnosis, as a plethora of conditions may cause a significant elevation. The dynamic change on serial testing indicates an acute process and can be helpful to guide further testing (such as coronary angiography for acute myocardial infarction or echocardiography for structural heart disease). If two levels are unchanged (within the imprecision of the assay) we will look for a non-acute aetiology.


Andrew I know you need to examine the change for comparison with other studies and prevailing clinical practice. But I am very worried that the way you described this means that you desire to encode the change into your thinking without questioning the very strong assumptions that change scores make. The fact that rise and fall may be important is not incompatible with the current value being all-important. But we need to see the data to learn rather than assume.


Happy to question assumptions - I was just illustrating the current situation for clinical diagnosis and diagnostic adjudication from a cardiology viewpoint. The troponin community is certainly not short on data; it would be great to explore this with your input if you were interested.


Very interested. Thanks.


Hi Andrew… I’m looking at data at the moment which has concentrations measured at 0, 1, 2, 3 h plus one or two more a few hours later. The rate of change depends a lot on when the first measurement is made relative to the onset of the infarct (the surrogate for that being onset of pain). Anything beyond ~12 hours after onset of symptoms can be in the excretion phase, where change is harder to observe than in the earlier release phase. I think that onset of symptoms should come into any model that is considering the prognostic value of change.


Agree John, I included the time interval between sampling and onset in my initial attempts. There will be an abstract published in JACC from ACC in 2016 - I think I used a linear mixed-effects model with a random term for subject, but I could be mistaken. I will check it out, but I’m keen to look at this properly with some statistical experts :wink:

#43 Figure 2 shows distribution of troponin concentrations by classification and the estimates + CI are derived from a LME model as described above. I will have the code somewhere. My neck is firmly on the block now!
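For readers unfamiliar with the approach mentioned above, a hypothetical sketch (not the published analysis code) of a linear mixed-effects model for serial log-troponin with a random intercept per subject, on simulated data with the 0-3 h sampling scheme John described:

```python
# Hypothetical sketch: linear mixed-effects model for serial log-troponin
# with a random intercept per subject. All data are simulated; the true
# fixed slope is set to 0.15 per hour purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_subj = 200
subj = np.repeat(np.arange(n_subj), 4)
hours = np.tile([0.0, 1.0, 2.0, 3.0], n_subj)   # sampling times after presentation, h
u = rng.normal(0, 0.5, n_subj)[subj]            # subject-level random intercepts
log_trop = 2.0 + 0.15 * hours + u + rng.normal(0, 0.3, subj.size)
df = pd.DataFrame({"subject": subj, "hours": hours, "log_trop": log_trop})

# Random intercept for subject via the groups argument
m = smf.mixedlm("log_trop ~ hours", df, groups=df["subject"]).fit()
print(round(float(m.params["hours"]), 2))       # fixed-effect slope estimate
```

A random slope for time (and the time-from-symptom-onset covariate discussed above) could be added once real data are in hand.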


Interesting discussion. The majority of diagnoses of myocardial infarction are based on a clearly elevated single cardiac troponin concentration. Whilst serial testing will demonstrate a rise and fall, treatment will be initiated on the initial result. Serial testing is helpful in two settings though.

First, in patients with low troponin concentrations at presentation, a second troponin concentration that is also low and unchanged helps to confidently rule out the diagnosis. This is particularly important in patients with transient chest pain or those presenting within 2 hours of chest pain onset. Here we define a plausibly significant change in troponin as >3 ng/L, as this is outwith the 95% confidence interval of two repeated measures in a population without disease (based on Pete’s data). This change value is not used to diagnose myocardial infarction, so analytical imprecision is less important; rather, it identifies those who should be admitted for peak testing.

Second, in patients with small increases in troponin concentration where the mechanism is uncertain, differentiating between acute and chronic myocardial injury with serial testing can be useful to guide subsequent diagnostic testing: echocardiography/CMR may be informative in those with chronic injury, and coronary angiography helpful in those with acute injury. Here there is little agreement on what comprises a significant change (absolute or relative).

So when considering how best to model this we might need to consider that serial testing and change in troponin is being used for different reasons, and this is based on the initial measurement.
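The first use of serial testing described above can be sketched as a simple triage rule. The >3 ng/L delta is quoted from the post; the 5 ng/L presentation cutoff is purely an illustrative assumption, not a recommendation:

```python
# Minimal sketch of the rule-out logic described above. The >3 ng/L delta is
# from the post; the 5 ng/L presentation cutoff is an illustrative assumption.
def triage(trop0: float, trop1: float,
           low_cutoff: float = 5.0, delta_cutoff: float = 3.0) -> str:
    """Classify a patient from two serial troponin concentrations (ng/L)."""
    delta = abs(trop1 - trop0)
    if trop0 < low_cutoff and trop1 < low_cutoff and delta <= delta_cutoff:
        return "rule out"                   # low and unchanged on serial testing
    if delta > delta_cutoff:
        return "admit for peak testing"     # plausibly significant change
    return "further assessment"             # e.g. elevated but static values

print(triage(2, 3))    # low and unchanged -> "rule out"
print(triage(4, 12))   # rising -> "admit for peak testing"
```

The point of the closing paragraph above is that this branching structure - change being interpreted conditionally on the initial value - is itself something a model should respect.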


Really interesting data @chapdoc1 . I’m curious did you examine creatinine/renal function as a confounding factor ?

I feel as though there is a disconnect between the statisticians and the clinicians in the discussion above. I think the reason why the clinicians are interested in change from baseline as well as peak troponin is that not everyone has the same baseline - for example, people with renal dysfunction tend to have a higher baseline and perhaps altered kinetics due to the renal excretion of troponin. Bear in mind that mild renal dysfunction is relatively common, and some groups at risk for MI, e.g. diabetics, are also at risk of renal dysfunction.

It occurs to me that one way to deal with this could be to extend the model to include creatinine (or eGFR) as an explanatory variable.


That reasoning would dictate that the initial value be used in an analysis, but it doesn’t follow that change from baseline is relevant. It assumes too much (linearity, slope of baseline value on follow-up = 1.0). The best example I’ve ever seen of this general phenomenon is with the most powerful prognostic factor in coronary artery disease: left ventricular ejection fraction at peak exercise. The resting LVEF is important, but the change from rest to exercise has no correlation with long-term cardiovascular events. The LVEF at peak exercise has as much prognostic information as the entire coronary tree, and is summative: it automatically takes into account where the patient started at baseline. I realize the physiology is totally different for troponin, as is the timing of measurements, but this example serves as a warning to anyone who takes change as the proper summary of two variables. This is from this paper. And don’t forget this paper.


@f2harrell What do you think of including interaction terms for the baseline and current values for this kind of model?

> …I suggest always starting with a regression model that relates f(baseline) + f(current value) to the outcome, where f is a flexible transformation such as from a regression spline
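One way to explore that question empirically, on simulated data (everything here is illustrative): fit the additive spline model, refit with all pairwise interactions between the two spline bases, and compare via a likelihood-ratio statistic.

```python
# Hypothetical sketch: do baseline x current interaction terms add anything
# beyond the additive spline model? Data are simulated with an additive truth.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1500
b = rng.lognormal(1.5, 1.0, n)                   # simulated baseline troponin
c = b * rng.lognormal(0.3, 0.8, n)               # simulated current value
logit = -4 + 0.8 * np.log(c) + 0.2 * np.log(b)   # additive truth, no interaction
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
df = pd.DataFrame({"y": y, "b": b, "c": c})

additive = smf.logit("y ~ bs(np.log(b), df=3) + bs(np.log(c), df=3)",
                     df).fit(disp=0)
# '*' expands to main effects plus all pairwise interaction columns (9 extra df)
interact = smf.logit("y ~ bs(np.log(b), df=3) * bs(np.log(c), df=3)",
                     df).fit(disp=0)
lr = 2 * (interact.llf - additive.llf)           # likelihood-ratio statistic
print(lr >= 0)
```

Spline-by-spline interactions are expensive in degrees of freedom, so with modest event counts it may be wiser to pre-specify a simpler interaction (e.g. linear-by-spline) than to fit the full tensor.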


Good on you Andrew for attempting to distinguish between type 1 and 2 early - it’s fascinating. When speaking with clinicians I usually ask some questions to try and help drive the model building exercise. These are the ones that came to mind:

  1. What factors do you use clinically to separate Type 1 from Type 2, and are any of these available in the ED, therefore available to be put in a model?
  2. What clinical value is there in knowing early in the ED if a patient is Type 1, Type 2 or “merely” myocardial injury? (the “so what?” question).
  3. As a follow up to 2 (if the answer is “lots of value”) - how well would you need a model to distinguish between the 1, 2 or myo injury for it to be clinically useful?