# Sensitivity, specificity, and ROC curves are not needed for good medical decision making

Frank, you bring up some good points as to why sens and spec are not useful. The backwards-time, backwards-information-flow probabilities / transposed conditionals argument resonates most with me.

In the literature, sens and spec are described as properties of the test, independent of prevalence - this is the advantage I usually see claimed for sens and spec over PPV and NPV. This does not make much sense to me. It seems that if the test is developed and validated using an appropriate population, then the predictive statistics don't need to be independent of prevalence. Could someone explain why this is or is not really an advantage?

I think a ROC-like curve with an AUC statistic would be more useful if the two axes were PPV and NPV. Why don't I ever see this used?

To the last question: it would no longer be an ROC curve.

The independence of prevalence is a hope (and is sometimes true) because it allows a simple application of Bayes' formula, if sens and spec are constant. The fact that sens and spec depend on covariates means that this whole process is an approximation, and it may be an inadequate one.
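To make the prevalence dependence concrete, here is a minimal sketch of Bayes' formula applied with assumed-constant sens and spec (all numbers hypothetical):

```python
# Bayes' rule applied with (assumed-constant) sens and spec.
# A minimal sketch: hypothetical values, not from any particular test.

def predictive_values(sens, spec, prev):
    """Return (PPV, NPV) given sensitivity, specificity, prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# The same test looks very different in screening vs. referral settings:
print(predictive_values(0.90, 0.90, 0.01))  # PPV ~ 0.08 at 1% prevalence
print(predictive_values(0.90, 0.90, 0.30))  # PPV ~ 0.79 at 30% prevalence
```

The same sens/spec pair yields a PPV of roughly 8% at 1% prevalence but about 79% at 30% prevalence, which is why the claimed prevalence independence of sens and spec does not carry over to the predictive probabilities clinicians actually need.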

This entire paragraph is fantastic, and worth re-reading multiple times. Thank you Frank.


Agree. In my experience these concepts are not of much practical value, and most docs have to look up the terms to interpret studies. Which we do rather poorly.


The main criticism I'm hearing is regarding how we teach the ideas around diagnostic evaluations. Ideally, we would have a probabilistic statement regarding the presence or absence of disease given a test result. The shortcoming of this approach is that we lack predictive models for various combinations of tests and patient factors. In general, a physician's experience is one where a test is deemed excellent, good, or poor for meeting some diagnostic criterion. We usually base our assumptions about a test on how sensitivity/specificity were reported in a select cohort or case/control type evaluation.

Again, I think understanding direct probabilities is important, and perhaps emphasizing the points you make is relevant to how we report diagnostic values in the literature. But in a practical sense, we are a long way from changing how we clinically interpret test results. The huge spectrum of patient presentations, values, and treatment decisions won't fit into neat probabilistic tools. I would like to see better integration of personalized risk for both patients and clinicians to guide treatment decisions. But I don't believe this issue will change how we decide to respond to a troponin level vs. non-specific testing like ESRs or d-dimers.


Extremely well said. This should push us to argue for more prospective cohort studies that can provide the needed data for multivariable diagnostic modeling. Patient registries can assist greatly in this endeavor.


There is an important distinction between probabilistic thinking and probabilistic tools. I agree, we are a long way from using tools to model each disease/outcome, and would argue that is not practical or helpful. But I actually think we do teach probabilistic thinking, at least in my medical education thus far. For example, we are encouraged to come up with a DDx list of a minimum of 3 diagnoses in decreasing probabilistic order, based on the information available at that time. This is where I think diagnostic test studies could change and better inform this process. If we were to construct simple models based on the key information available, and provide clinicians with the probabilities given that information, it would help them understand which diagnoses are more and less likely. Further, it would give them an idea of HOW likely, which helps determine whether further testing is required to confirm the diagnosis. These need not (and, in my opinion, should not) be specific prognostic tools meant to be calculated at the patient's bedside, but instead reflections of what the probability is given the test result, thereby giving clinicians an idea of what the test is telling them about their patient.
To my eye, this approach to diagnostic studies actually involves fewer assumptions than reporting sens/spec, but the trade-off is that it requires more interpretation by readers. It is because of this limitation that I am trying to find ways to improve translation.


To check my understanding: suppose we use a model and find p(disease | age, sex). We then do some test; now we want p(dz | age, sex, test), but suppose that model doesn't exist. Hence we can use Bayes' rule with the model's p(disease | age, sex) as a prior and the sens/spec of the test to make the multiplier. In this case, sens/spec of the test were necessary (?)

Sensitivity and specificity are really functions of patient characteristics such as age and sex, but no one seems to take that into account. So we really don't have what we need for Bayes' rule either. Ideally you'd like to have an odds ratio for the test (the ratio of LR+ to LR-) that is conditional on age and sex. You could make a crude logistic model if you knew the likelihood ratios. We need more prospective cohorts or registries to derive logistic models the good old-fashioned way. That would also allow us to handle continuous test outputs, multiple test outputs, interaction between test effect and age or sex, and many other goodies.
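A toy numerical sketch of this point (all numbers made up): if sensitivity and specificity differ across age strata, plugging the marginal sens/spec into Bayes' rule gives a different answer than the covariate-conditional calculation.

```python
# Toy sketch (all numbers made up): sens/spec that vary with age.

def posterior(prior, sens, spec):
    """P(dz | positive test) via Bayes' rule within one stratum."""
    return sens * prior / (sens * prior + (1 - spec) * (1 - prior))

# Hypothetical strata: prior P(dz | age), sens, spec
prior_y, sens_y, spec_y = 0.05, 0.70, 0.95   # young
prior_o, sens_o, spec_o = 0.30, 0.95, 0.90   # old

# Covariate-conditional posterior for a young patient:
exact = posterior(prior_y, sens_y, spec_y)

# Marginal sens/spec over equal-sized strata, applied naively:
p_dz = 0.5 * prior_y + 0.5 * prior_o
sens_m = (0.5 * prior_y * sens_y + 0.5 * prior_o * sens_o) / p_dz
spec_m = (0.5 * (1 - prior_y) * spec_y
          + 0.5 * (1 - prior_o) * spec_o) / (1 - p_dz)
naive = posterior(prior_y, sens_m, spec_m)

print(round(exact, 3), round(naive, 3))  # 0.424 vs 0.403
```

The two posteriors disagree because the "constant" sens/spec are really mixtures over the covariate distribution, exactly the approximation error being described.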


So if anything we needed p(test | age, sex)

To make p(dz | age, sex, test) = p(test | age, sex, dz) p(dz | age, sex) / p(test | age, sex) ?

So p(test | dz), i.e. sensitivity, is useless here, and in general whenever the prior conditions on anything.


Thatâs mainly correct, and now note the silliness of the backwards-information-flow three left turns to make a right turn approach. By the time you get sens and spec right, you have to recognize that they are not unifying constants but that the entire process needed to get P(dz | X) is more complicated than directly estimating P(dz | X) in the first place. So it all boils down to data availability, and lacking the right data, are sens and spec constant enough so that simple approximations will suffice for a given situation. That will depend, in addition to other things, on the disease spectrum and the types of predictors available.

Note that starting with P(dz | X) opens us up to many beautiful features: Treating disease as ordinal instead of binary, having multiple test outputs, allowing test outputs to interact with other variables, etc.


I would add a caveat - we are always learning to improve our ability to interpret tests. Novel tests, test improvements, or even the failure of memory all contribute to the need for re-education.

Having a comparison list of tests with LRs is much easier to read than a list with sens/spec, and lends itself to better conceptualization of the magnitude of directional change, as well as the multiplicative effects of multiple different tests.
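The multiplicative updating described here can be sketched as follows (the LR values are hypothetical; chaining LRs also assumes the tests are conditionally independent given disease status, itself a strong approximation):

```python
# Sequential updating on the odds scale with likelihood ratios.
# Hypothetical LR values; chaining assumes conditional independence
# of the tests given disease status.

def update_prob(prior_prob, *lrs):
    """Multiply the prior odds by each test's LR; return a probability."""
    odds = prior_prob / (1 - prior_prob)
    for lr in lrs:
        odds *= lr
    return odds / (1 + odds)

# Prior 0.10; a positive test with LR+ = 6, then a negative one with LR- = 0.3:
print(round(update_prob(0.10, 6), 3))       # 0.4 after the first test
print(round(update_prob(0.10, 6, 0.3), 3))  # 0.167 after both
```

The odds scale makes each test's contribution a simple multiplication, which is what makes an LR list easier to reason about than sens/spec pairs.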


What about "updating" as new information is found? Say a doctor sees a patient for the first time and is worried about risk of MI, so they use a model to get P(dz | gender, cholesterol, smoking status). The patient returns next year because an uncle just had an MI. To now incorporate this new information, isn't using Bayes' rule necessary*? I kind of gather from your comments that the answer is no, there's a way to directly estimate the posterior P(dz | gender, cholesterol, smoking status, relative with MI) without Bayes' rule, but I haven't been able to work it out. Or are you suggesting to just build P(dz | gender, cholesterol, smoking status, relative with MI)?

*But if there had been a study to get the things necessary for Bayes' rule to do this update, they probably could have just made a logistic regression in the first place…

Why not mine the EHR to figure out what combinations are most common and build models for those?

Yes, or in the absence of such data, make various assumptions and use Bayes' rule to update.

The following comments are an archive of comments entered under an older blog platform and related to the article In Machine Learning Predictions for Health Care the Confusion Matrix is a Matrix of Confusion by Drew Levy. Most of the comments were written in 2018.

Bob Carpenter: Thanks for the post. I'm a big believer in calibration, and we've been pushing that as a baseline for Stan with our model comparison packages. Basically, we want the sharpest predictions for an empirically calibrated model (following Gneiting and Raftery's terminology in their JASA calibration paper; I don't know who invented the terminology in the first place).

With full Bayes, we get calibration for free if our model is well specified; too bad they almost never are, so we have to calibrate empirically.

I believe the fascination with the area under the receiver operating characteristic curve (AUC) in machine learning is that it lets people use radically uncalibrated techniques like naive Bayes (uncalibrated for real data because of false independence assumptions) to measure systems and not have them look totally idiotic.

As an example, I recently (in the last two or three years) had a discussion with a machine learning professor who was predicting failure of industrial components and couldn't figure out how to do so when (a) some components failed every year, and (b) their logistic regression wasn't predicting much higher than a 10% chance of failure for any component. I never managed to convince them that their results might just be calibrated; they seemed to be getting the prevalence of component failures roughly right. Naturally, someone pointed them to AUC, so they went with that to produce a result other than 100% precision, 0% recall.

As Frank points out, precision (positive predictive value in epidemiology language) and recall (sensitivity) are deficient because they don't count the true negatives. In statistical language, they are not proper scoring rules.

The reason we care about sensitivity and specificity in diagnostic testing, where we eventually have to choose an operating point or at least make decisions on a case-by-case basis, is that, combined with prevalence, they determine the entire confusion matrix. They also let us tune decisions when the quadrants of the confusion matrix (like false positives and true positives) have different utilities (dollar values, expected life expectancy and quality changes, etc.).

In real applications, we often don't have the luxury of varying thresholds. Instead, we have to work to build guidelines with good operating characteristics. Actions might be patient- or item-specific, but they're usually not. Usually it's "give the patient a mammogram; if anything looks suspicious, escalate to MRI; if it still looks suspicious, go with a puncture test; and if that's clean, come back next year" (even though the first two tests have high sensitivity and low specificity, and the last is the reverse). Frank can correct me here, but the guidelines I'm familiar with only tend to indicate when to start the process. For example, women over age 50 have a higher population prevalence of breast cancer than those over 40, which causes a test with fixed sensitivity and specificity to become more precise (higher positive predictive accuracy, in epidemiology terms). My wife's been resisting the follow-ups "just to be sure" (against medical advice!) from sketchy mammograms now that she knows a bit more statistics. We've sadly had a friend go through the not-unusual [positive, positive, negative] result sequence for mammograms for two years and then die of breast cancer the following year.

In my applied work in natural language, the decisions of our customers almost always meant selecting an unbalanced operating point. For example, our customers for spell checking and automatic question answering wanted high positive predictive performance (they didn't want false positives), whereas our defense department customers doing intelligence analysis wanted high sensitivity (they were OK with a ratio of up to 10 to 1 false positives to true positives, or sometimes higher depending on importance, if the sensitivity was good).

P.S. The popularity of the Brier score (quadratic loss on a simplex) also puzzles me, though it's at least a proper scoring rule for classification. Also, it's not new; it came out of statistics in 1950, about the same time as stochastic gradient descent, another technique that ML people often think was a new invention for big data ("big" is relative to the size of your computer!).

The problem with log loss is that it's very flat through most of its operating region, so despite also being a proper scoring rule, it is not a very sensitive measure. So maybe Brier scoring is better. Do you have any insight into that? We're looking for general recommendations for Stan users.
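A small self-contained comparison of the two scores on toy data illustrates how differently they penalize a single confident error:

```python
import math

# Brier score vs. log loss on the same toy predictions.
# Both are proper scoring rules, but log loss punishes a confident
# error far more heavily than the Brier score does.

def brier(p, y):
    return sum((pi - yi) ** 2 for pi, yi in zip(p, y)) / len(y)

def log_loss(p, y):
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for pi, yi in zip(p, y)) / len(y)

y = [1, 0, 1, 0]
good = [0.9, 0.1, 0.8, 0.2]
overconfident = [0.99, 0.01, 0.99, 0.9]   # last prediction badly wrong

print(round(brier(good, y), 4), round(brier(overconfident, y), 4))
print(round(log_loss(good, y), 4), round(log_loss(overconfident, y), 4))
```

Here the one badly overconfident prediction multiplies the Brier score by about 8 but the log loss by about 3.5 relative to much lower baselines; as the wrong probability approaches 1, log loss diverges while the Brier score stays bounded, which is the trade-off behind the flatness complaint above.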

P.P.S. I had the advantage of my second ML paper (circa 1998) being overseen by a statistician at Bell Labs, so we took a singular value decomposition approach to dimensionality reduction, used some hokey search-based scoring, then calibrated using logistic regression (we didn't quite know enough to build an integrated system). We were building a call routing application that mattered for the enormous call center of the customer (USAA bank).

Frank Harrell: Wow Bob. What rich comments. These comments should be more visible, so I suggest you copy them as a new topic on datamethods.org if you have time. What you and colleagues are doing in the Bayesian prediction world is incredibly exciting. Thanks for taking the time to write this! Just a few comments: The Brier score has a lot of advantages, and there are various decompositions of it into calibration-in-the-large, calibration-in-the-small, and discrimination measures. But its magnitude is still hard to interpret for many, and depends on the prevalence of Y=1. Regarding sens and spec, I feel that even in the best of circumstances for them, they are still not appropriate. Their backwards-looking nature is not consistent with forward-time decision making. And this raises the whole issue of "positive" and "negative". I try to avoid these terms as much as possible, for reasons stated in the Diagnosis chapter of Biostatistics for Biomedical Research.

My current thinking is that the types of model performance indexes that should be emphasized are a combination of things such as (1) deviance-based measures to describe predictive information/discrimination, (2) explained variation methods related to the variance of predicted values, (3) flexible calibration curve estimation using regression splines with logistic regression or nonparametric smoothers (I still need to understand how this generalizes in the Bayesian world), (4) calibration measures derived from (3). For (4) I routinely use the mean absolute calibration error and the 0.9 quantile of absolute calibration error. One can also temporarily assume a linear calibration and summarize it with a slope and intercept.
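As a crude stand-in for item (4), here is a binned (rather than spline- or smoother-based) estimate of mean absolute calibration error on toy data; the binning is a simplification of the flexible approach described above, and all numbers are made up:

```python
# Mean absolute calibration error via equal-size quantile bins.
# A simplification of the spline/smoother-based calibration curves
# described above; toy data, illustration only.

def mean_abs_cal_error(p, y, bins=4):
    """Average |mean predicted - observed event rate| over sorted bins."""
    pairs = sorted(zip(p, y))
    size = len(pairs) // bins
    errs = []
    for i in range(bins):
        chunk = pairs[i * size:(i + 1) * size]
        mean_p = sum(pi for pi, _ in chunk) / len(chunk)
        mean_y = sum(yi for _, yi in chunk) / len(chunk)
        errs.append(abs(mean_p - mean_y))
    return sum(errs) / bins

p = [0.1, 0.2, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
y = [0, 0, 0, 1, 1, 0, 1, 1]
print(round(mean_abs_cal_error(p, y), 3))
```

A smoother-based version would replace the bin means with a nonparametric estimate of P(Y=1 | predicted p), avoiding the arbitrary bin boundaries.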

Drew Levy: Bob: this comment is terrific; it bumps the discussion up to the next level! Thank you.

A key point that you raise, and that I will now pay closer attention to, is the difference between prediction applications (the specific focus in this post) and "guidelines": "…build guidelines with good operating characteristics. Actions might be patient or item-specific, but they're usually not." This may be another prevalent source of ambiguity and confusion, similar to the subtle difference between prediction and diagnostics.

For our purposes here (clarifying appropriate methods), guidelines might be construed as classification with implications for subsequent action(s), but also with the qualification that the guideline is a heuristic subject to modification by additional information (e.g., clinical judgement). The guideline admits incomplete information that is not part of the model rationalizing the guideline; whereas the goal of prediction is to formally include as much of that ancillary information (the structure and the uncertainty/randomness) as possible for accurate individualized prognostication. And "classification", per se, seems like yet another thing. It is not hard to see how something like "classification with implications for subsequent actions" gets confused with prediction/prognostication.

What I am sensing is that the confusion about the specific purposes to which ML and prediction modeling are applied is likely broader than just "prediction vs. classification". There is a lot to unpack here. Elucidating the various objectives, and matching them to appropriate methods, requires more thought, more clarity, and hopefully, eventually, some consensus on definitions.

Jeremy Sussman: This piece blames machine learning for a problem that 100% belongs to medical researchers and medical statisticians.

MLâs do not use sensitivity and specificity! (More often they use precision and recall, which are even worse, but itâs unfair to blame ML for techniques they donât use.)

In prediction work for medical journals, we've published with AUROC and calibration measures. We often include some slightly newer ones (Brier score, calibration slope), but we've never been asked to; no one has ever told us to avoid AUROC or recommended something better.

Please propose things that are better! Ask us to use them. Get them into places like the TRIPOD statement. But this seems unfair.

Frank Harrell: Jeremy, there are two reasons that my experience does not at all resonate with yours. First, I serve as a reviewer on a large number of submissions of machine learning exercises to medical journals. I see sensitivity and specificity used very often. Second, precision and recall are improper accuracy scoring rules, i.e., they are optimized by a bogus model. Further, it is well known that the c-index (AUROC) is not powerful enough for comparing two models. It is a nice descriptive statistic for a single model, if you want to measure pure discrimination and don't care (for the moment) about calibration. Soon I'll be writing a new blog article describing what I feel are the best current approaches for measuring the added value of new markers, which relates to your plea for recommendations. Some of this is already covered in a recent talk you'll see at the bottom of my blog main page. Explained outcome variation is the watchword.

Drew Levy: Jeremy, thank you for your comment and perspective. First, I would like to say that "blame" is not the intent or spirit in which the post was shared. We should all be learning together.

Secondly, I regret that you seem to have missed the main thrust of the piece: the ROC is derived from sensitivity and specificity. I believe that the underlying fundamental mechanics of the AUROC are opaque to most who use it, and that for the purposes of evaluating ML that is intended as a prediction tool in health care, the AUROC, which carries through the conditioning of sensitivity and specificity, does not afford what people think it does.

If in your work you have employed calibration and other methods for evaluation of model performance that have conditioning and information appropriate for prospective prediction, then you are to be applauded. That sets a very good and encouraging example.

You are correct to insist that alternatives should be provided as well. That will be forthcoming.


Frank, sometimes I think we arbitrarily classify measures based on context, but should we really do that? They are all inter-related anyway, so if we dismiss one we end up dismissing a class of measures. I will take an example from page 43 of Hernan and Robins, where heart transplant is the intervention and mortality is the outcome, and Pr(death) and n are shown in the cells:

Clearly we can evaluate this using measures of effect, as Hernan and Robins did, but I could turn this around and consider mortality as a test of treatment status. In this case, in those where gender = 0:
Se = 2/10 and Sp = 9/10
LR+ = Se/(1 - Sp) = 0.2/0.1 = 2
Reverting back to a trial, RR = 0.2/0.1 = 2
LR- = (1 - Se)/Sp = 0.8/0.9 = 0.89
Reverting back to a trial, RRc = 0.8/0.9 = 0.89
OR = RR/RRc = LR+/LR- = 2/0.89 = 2.25
Odds(AUC) ≈ sqrt(OR) = sqrt(2.25) = 1.5
AUC ≈ 1.5/2.5 = 0.6

So if we are to query Se and Sp, should we then not do the same for the whole series of measures?

Addendum: I am not suggesting that any of them is good in all contexts, e.g. RR is useless as an effect measure but useful as an LR, etc.