Even small (clinically insignificant) effects matter if we expose a lot of people to them

Consider the following statements:

  • “The drug shortens the duration of common cold symptoms by 1 hour (from 1 week with placebo). This is clinically insignificant, but hey, there are one million illnesses a year, so if everyone took the drug, it would mean 125 thousand workdays saved!”
  • “The drug reduces the severity of depression symptoms by only 2 points on a 50-point scale, which is considered clinically insignificant. However, given that we have at least 100 thousand patients suffering from this disease, treating everyone would result in a benefit of 200 thousand points, which is definitely clinically significant!”
  • “This oral anticoagulant increases the risk of intracranial bleeding by only 0.1% (in relative terms), which seems insignificant, but given the huge number of patients taking the drug (one million, with a baseline risk of 0.01/year), this ‘clinically insignificant’ effect actually means 10 very severe side-effects caused by the drug each year!”

[Let’s assume for simplicity that these effect sizes are certain, i.e. measured in very large, well-designed trials. The arithmetic behind the three statements is sketched below.]
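As a quick back-of-the-envelope check, here is a minimal Python sketch of the arithmetic implied by the three statements (the 8-hour workday and the relative-risk reading of the 0.1% are assumptions on my part, not stated explicitly above):

```python
# Back-of-the-envelope arithmetic behind the three statements.

# 1) Common cold: symptoms 1 hour shorter, one million illnesses per year
hours_saved = 1 * 1_000_000
workdays_saved = hours_saved / 8            # assuming an 8-hour workday
print(workdays_saved)                       # 125000.0 workdays

# 2) Depression: 2 points on a 50-point scale, 100 000 patients
points_gained = 2 * 100_000
print(points_gained)                        # 200000 "points" of benefit

# 3) Anticoagulant: baseline risk 0.01/year, 0.1% relative increase,
#    one million patients on the drug
extra_bleeds = 1_000_000 * 0.01 * 0.001
print(extra_bleeds)                         # 10.0 extra bleeds per year
```

The multiplications themselves are trivially correct; the question of the thread is whether they are meaningful.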

What are your thoughts on the validity and soundness of such reasoning?


My 2 pence:

  • If the drug has a small effect, it is likely that some (small) side-effect will outweigh the benefit. The smaller the effect, the more critically possible side-effects must be taken into consideration.

  • The “significance” of the benefit you describe is measured on a “community basis” (the “individual” benefit is insignificant). So it’s fair to balance the possible community benefit against the community costs. If the drug is given to so many people, even a relatively cheap drug will generate considerable costs for the community. How much money is tied up by the health system to save 125 thousand workdays? Or: from which part of the health system do we take away the money to prescribe these drugs, and is there a net benefit in doing so?
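To make the cost side of this argument concrete, here is a minimal sketch for the common-cold example; the price per treatment course is an invented number, used only to show the shape of the calculation:

```python
# Hypothetical cost-effectiveness check for the common-cold example.
treated_illnesses = 1_000_000     # from the original example
workdays_saved = 125_000          # from the original example

cost_per_course = 5.0             # hypothetical price of one treatment course
total_cost = treated_illnesses * cost_per_course

print(total_cost)                       # 5000000.0 spent in total
print(total_cost / workdays_saved)      # 40.0 per workday saved
```

Whether 40 (or whatever the real figure turns out to be) per saved workday is a better use of the budget than the alternatives is exactly the opportunity-cost question.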


Each of these examples is a case on its own, and really underlines the importance of understanding clinical context.

Shortening symptoms of the cold by one hour is not going to change patient behaviour. They are not going to go back to work an hour earlier. The appropriate effect size measure would have been working days lost. Symptom duration is a proxy for this, but proxy measures have a long history of backfiring – remember anti-arrhythmia agents and MI?

The antidepressant result is, again, not a real-life effect size. Time to remission and one-year outcome are the measures of choice. The scale, of course, is not a true interval measure, and consequently the two-point change could be due to reduced scores on a single symptom such as poor appetite. Standardised scales are invaluable for making treatment decisions, but do not measure the outcome that the patient and clinician are working towards. Psychiatry is slowly moving from “what’s the matter with you?” to “what matters to you?”, so I would wait for a more appropriate measure of treatment outcome.
As to your arithmetic – what would happen if you gave 100,000 potted plants 1 ml of water each? It adds up to a lot of water, but your plants die.

Risks of serious adverse events such as intracranial bleeding are present all around us. Oral contraception increases the risk of deep vein thrombosis and, as far as I remember, could account for one death per 4,000,000 user-years. Is this an acceptable risk? Pregnancy, by the way, carries a risk of DVT twice as large as that associated with oral contraception.

Risk must also be seen in context. If a surgical procedure to remove a cyst from the hand carries a very small risk that the person will lose sensation in the tips of two of their fingers (I’m making this up, but bear with me), the surgeon may not mention it to the patient. However, the current ruling from the British Medical Council is that a ‘serious’ risk means ‘serious to the patient’, not ‘medically serious’. If the patient were a professional violinist, they would be horrified at the idea of losing sensation in their fingertips, and even though the risk is very small the doctor must tell the patient.

In each case, the results of an analysis of data are only the beginning of a messy process of decision making!


Thank you for the comments!

Yes, I agree, that’s one of the important points here in my opinion as well: it obviously doesn’t matter whether we compare one million times the risk to one million times the benefit, or simply the risk to the benefit. The question is: can this question (these questions) really be reduced to such simple arithmetic…?

That was absolutely intentional; I can totally imagine that the answers to these questions differ (that’s why I chose these examples).

Well, one could argue that 7 out of 8 patients get better one hour earlier in the middle of the day, so it indeed doesn’t matter (from this aspect): 0 work hours saved; but 1 in 8 gets better early enough in the morning that the drug makes the difference between going to work or not: 8 work hours saved. So on average, we really have 1 work hour saved.
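A minimal sketch of that averaging argument, using the hypothetical 7-in-8 / 1-in-8 split above:

```python
# Expected work hours saved per patient under the hypothetical split:
# 7/8 recover one hour earlier in the middle of the day (0 hours gained),
# 1/8 recover early enough in the morning to go to work (8 hours gained).
expected_hours_saved = (7 / 8) * 0 + (1 / 8) * 8
print(expected_hours_saved)   # 1.0 hour saved per patient, on average
```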

Yes, of course we could argue about these, but let’s put these issues aside, and – as a thought experiment – accept that these are the metrics we use, so that we can focus on the other issues raised by the questions.

Now, that’s very important, I think. I intentionally didn’t want to present my own opinion in the opening post, but I also believe this is one of the essential issues here: is 5×2 = 1×10…? That is, are five people with slightly better appetite the same as one person with much less anxiety and much better mood? Because a calculation like “200 thousand points of benefit for the population” practically assumes this!
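A tiny illustration of the 5×2 = 1×10 point: the two hypothetical scenarios below have identical totals in scale points, but arguably very different clinical meaning.

```python
# Two hypothetical scenarios with the same total benefit in scale points.
scenario_a = [2, 2, 2, 2, 2]    # five patients, each with slightly better appetite
scenario_b = [10, 0, 0, 0, 0]   # one patient with much less anxiety and better mood

print(sum(scenario_a), sum(scenario_b))   # 10 10 -> identical totals
# Summing across patients treats these as equivalent, which is exactly the
# assumption behind the "200 thousand points for the population" claim.
```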

Also note the difference between this and the previous example: work hours are just numbers, so the usual arithmetic applies, but here I find that we have many more questions about this logic. (But I am absolutely open to any discussion about this…)

That’s exactly why I wrote “can this question (these questions) really be reduced to such simple arithmetic” in my reply to @Jochen (also see my previous remark).

Very good point indeed. However, if we assume that being a violinist is not associated with a different chance of being prescribed oral anticoagulation, then this doesn’t matter at the population level (averages will apply).

I realize that your question is primarily trying to get at whether the math behind these types of extrapolations is defensible. But from a clinical standpoint, these types of extrapolations, if used to justify an intervention, are often moot.

Patients make decisions based on the probability that they, not the population at large, will see benefit or harm from a treatment. I suspect this is why we’re starting to see that “positive” RCT results in some areas (e.g., cardiology) are not being adopted on a large scale in the clinic. Cardiovascular trials these days have to be huge in order for a new treatment to show benefit, since we already have several medications that reduce risk. So today we see massive trials that show a “statistically significant” benefit from a treatment, but which ultimately don’t impact practice that much in the real world because absolute risk decreases are so small. In my experience, it’s very unlikely that a patient who is already taking four medications for his heart will agree to add yet another if there is only a small chance he will benefit (particularly if he has to pay for that medication and it’s associated with serious potential harms). In other words, any potentially significant “population” impact of the small RCT effect becomes irrelevant if individual patients are unlikely to accept the treatment.
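To illustrate why a “positive” trial can still look unattractive to an individual patient, here is a sketch with purely hypothetical numbers (the 2% baseline risk and 10% relative reduction are not taken from any specific trial):

```python
# Hypothetical large CV trial with a small absolute benefit.
baseline_risk = 0.02             # 2% event rate over the trial period (assumed)
relative_risk_reduction = 0.10   # 10% relative reduction (assumed)

absolute_risk_reduction = baseline_risk * relative_risk_reduction
number_needed_to_treat = 1 / absolute_risk_reduction

print(absolute_risk_reduction)   # 0.002 -> 0.2 percentage points
print(number_needed_to_treat)    # 500.0 -> treat 500 patients for one to benefit
```

With a big enough trial this is easily “statistically significant”, yet the individual patient is being asked to add a fifth drug for roughly a 1-in-500 chance of benefit.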

Your question takes on another meaning in the context of decisions that have to be made at the population level (e.g., decision-making bodies that have to decide whether they will cover the cost of a new treatment or vaccine). Coverage for the cost of vaccines is a good example. Today, certain infections are less common than they used to be (e.g., meningococcus, pneumococcus), likely because children these days are vaccinated. Vaccine manufacturers now promote newer variations on these vaccines that will cover residual strains of these bacteria. The absolute incidence of infections with these residual strains is low, so the chance that any given person will see a benefit from the newer vaccines is also low. Probably as a result of this small expected benefit, the cost of the newer vaccines is not covered by our government and they are effectively used primarily by wealthier patients who can afford them.

Don’t know whether these examples are of any interest, but I think you raise important questions. First, is it mathematically sound for authors to extrapolate small effects to a population level in the way you describe? And second, even if these extrapolations were valid, how do they translate (if at all) to individual patients and decision-makers?
