The pointlessness of small effects in epidemiology


#1

As a PhD student in epidemiology, I was indoctrinated to appreciate the importance of small effects in public health research, since small effects applied to entire populations can lead to substantial net impacts.

However, years on, I almost never believe the small effects that I see published - and even when I do believe them, I certainly wouldn’t fault anyone else for their skepticism. Given how we currently analyze and publish most data (or at least how it seems to me), small effects are just so likely to be the result of smoke and mirrors that a lot of other things have to be in place for me to take them seriously (strong theory, expert consensus, triangulation, trusted researchers, etc).

With that in mind, a critical facet of epidemiology is its clear link to intervention, which in turn is linked to policy-making bodies (I am mostly thinking of public health policy requiring politicians to sign off), which seem to have a hard enough time incorporating evidence into their decision making as it is. And for anything politicized (as public health will always be, since it attempts to weigh collective against individual responsibility), small effects that aren’t universally appreciated will always be at risk of being explained away by opponents. So while in theory I still think small effects can be important at the population level, I doubt their practical importance from an intervention/policy perspective.

So…should we abandon pursuit of small effects (and perhaps turn more attention to hunting for so-called big wins and improving implementation of what we already know works)?


#2

While I generally agree, in some scenarios, a small effect is more plausible than a large effect. Not everything can have big effects on outcomes we care about, and large effect estimates can suggest that the effect observed is just noise. Obviously this depends heavily on how the study was conducted, but I’m just saying that “hunting for big wins” is not necessarily a good goal.

Also, what would it look like in practice to abandon the pursuit of small effects? Presumably we conduct studies to estimate effects that we don’t know the precise magnitude of, and we would want people to publish the results of that estimation regardless of whether it turned out to be small or big…


#3

It’s a great point. Large effects are also quite suspicious! 🙂

The comparison should be among effects we suppose are real.

As for how to progress…when you identify a small effect that you think is both real and actually small…move on to new territory.


#4

I certainly understand your skepticism, but I also think small effects really can be worth pursuing. I often think of some examples that Rothman, Greenland, & Lash provided in Modern Epidemiology: for example, the relationship between smoking and cardiovascular disease, or between environmental tobacco smoke and lung cancer. The effects are not especially large, but they appear to be there and are thought to be causal.


#5

I think it depends on whether i) there’s a public health intervention that could be applied, ii) the evidence of your small effect is compelling enough that it can’t be dismissed (i.e. is the study rigorous enough to believe that the small effect is real), and iii) the small effect scaled up to the population really results in millions of dollars saved or thousands of disease cases prevented.

I think Darren has a background in nutritional epidemiology (poor soul) so let’s take a topical example from that world that keeps floating up periodically: does bacon cause cancer?

(disclaimer: I love bacon and probably would not stop eating it no matter what it supposedly causes, but may reduce consumption to “occasional weekend brunch treat” if I was sufficiently convinced that it significantly increased my absolute risk of cancer)

There seem to be dozens upon dozens of studies of varying quality (mostly poor) that in some way look at this question. The most recent one I can remember off the top of my head, which drew some publicity, was a study reporting that processed meat increased the risk of a certain type of cancer (bowel?) by 18% (on the relative scale)…but that only raises the absolute lifetime risk from something like 5 percent to 6 percent.
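For concreteness, the relative-to-absolute arithmetic can be sketched in a few lines. The 5% baseline and 18% relative increase are the rough figures quoted in this post, not verified against the original study:

```python
# Sketch of converting a relative risk increase into absolute lifetime risk.
# Figures are the ones quoted above (illustrative only, not verified).
baseline_risk = 0.05          # ~5% baseline lifetime risk, per the post
relative_increase = 0.18      # 18% relative increase for processed meat

exposed_risk = baseline_risk * (1 + relative_increase)
absolute_increase = exposed_risk - baseline_risk

print(f"exposed lifetime risk:  {exposed_risk:.3f}")       # 0.059
print(f"absolute risk increase: {absolute_increase:.3f}")  # 0.009
```

So a headline-grabbing "18% higher risk" amounts to less than one percentage point of absolute risk, which is the gap the post is pointing at.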

Now, pretend that I’m a politician hunting votes from the vegan crowd, and I decide that I’m going to latch onto this study and pick a fight against BIG PORK by citing that processed meat (including bacon) causes cancer; in my next action as a public official, I’ll introduce a bill that levies a tax on all processed meat products since it is a known carcinogen.

As Darren alludes, though, the effect is modest on the absolute scale and the study questionable enough that opponents can poke lots of holes in it. Nobody fills out food frequency questionnaires well, the study was not well controlled, we can’t account for all the other confounding factors, etc. Is this study really strong enough evidence to warrant a public health action in response?

Anyway, I suspect that’s what Darren is getting at here.

Serious question: what is the point of anyone continuing to study whether bacon causes cancer? Does anyone believe at this point that this has “practical importance from an intervention/policy perspective” as Darren puts it?


#6

Excellent discussion. On the big-picture issues, we are stuck because of at least two things:

  • High-dimensional epidemiology, especially genetic epidemiology, has resulted in overestimation of effects such as odds ratios to such a degree that scholars such as Ioannidis and Ransohoff have advocated that large odds ratios be instantly disbelieved.
  • Small effects are easily biased unless the estimand was pre-specified and the exposure was randomized. In general, one could say that the smaller the effect, the better the quality of research required to publish it.
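The second point can be illustrated with a toy simulation (entirely my own sketch; the coefficients are arbitrary): when exposure is not randomized, even a weak unmeasured confounder can manufacture a small "effect" out of nothing, on the same order as the small effects under discussion.

```python
import random

# Toy sketch (my own; coefficients are arbitrary): exposure has NO direct
# effect on outcome, but both depend weakly on an unmeasured confounder,
# so the naive regression slope shows a small but entirely spurious "effect".
random.seed(1)
n = 10_000
confounder = [random.gauss(0, 1) for _ in range(n)]
exposure = [0.3 * c + random.gauss(0, 1) for c in confounder]
outcome = [0.3 * c + random.gauss(0, 1) for c in confounder]

# Naive (unadjusted) least-squares slope of outcome on exposure
mx = sum(exposure) / n
my = sum(outcome) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(exposure, outcome))
         / sum((x - mx) ** 2 for x in exposure))
print(f"naive 'effect' estimate: {slope:.3f}  (true direct effect is 0)")
```

A large effect would be hard to fake this way; a small one is exactly what modest, unmeasured confounding produces.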

#7

I agree, especially with regard to nutritional epidemiology. I’ve written about it as a field here, but I have also dissected a few recently published papers, including one linking protein consumption to heart failure and another that was spun by many to indicate that moderate carbohydrate consumption is best.

Apart from some of the well-known issues with memory-based questionnaires, multiplicity and model selection, and residual confounding, I think this is also a field that’s filled with heavy bias. And I don’t mean systematic errors, but rather serious cognitive biases on the part of researchers who study and chase after these small effects.

Unlike other fields of epidemiology, such as pharmacoepidemiology or environmental epidemiology, where researchers rarely have an intimate, everyday connection with the things they study (second-hand tobacco smoke, for example), food consumption is universal, and everyone, nutrition researchers especially, has beliefs about what constitutes a healthy diet. And because the effects are so small and mixed in with noise, and because there’s so much flexibility in estimation methods, I think there’s a lot of room for producing results that are simply nonexistent.
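As a toy illustration of how noise plus analytic flexibility can generate small "findings" (my own sketch, not from any of the papers mentioned): screening many truly null exposure-outcome pairs at the conventional p < 0.05 threshold yields a steady trickle of spurious small associations.

```python
import random

# Toy sketch (my own, not from the thread): with pure-noise data and many
# exposure-outcome comparisons, "significant" small effects appear by chance.
random.seed(0)

def null_study(n=500):
    """Simulate one truly null association: exposure and outcome independent."""
    exposure = [random.gauss(0, 1) for _ in range(n)]
    outcome = [random.gauss(0, 1) for _ in range(n)]
    mx, my = sum(exposure) / n, sum(outcome) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(exposure, outcome)) / n
    sx = (sum((x - mx) ** 2 for x in exposure) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in outcome) / n) ** 0.5
    return cov / (sx * sy)  # sample correlation

# Rough large-sample significance cutoff for |r| at alpha = 0.05: 1.96 / sqrt(n)
cutoff = 1.96 / 500 ** 0.5
hits = sum(abs(null_study()) > cutoff for _ in range(200))
print(f"'significant' null associations: {hits} of 200 (expect about 5%)")
```

Run enough dietary exposures against enough outcomes and some small "effects" will clear the bar with no real signal behind them, which is the worry about this field.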

So I think nutritional epidemiology is a very special case, and it’s also a field that faces several challenges given how diets are linked to so many things.


#8

Part of our view of epidemiology is also skewed by well-to-do OECD populations.

There is a need to address a lot of problems in marginalized populations (refugees, developing nations, etc.), where the desire for interventions to avoid harmful outcomes is tempered by uncertainty about identifying the problems or determining which options work.

I would argue that in these settings, even small effects are helpful as they provide more informed decision making.

My concern is that much epidemiology is an answer in search of a problem, rather than a problem in search of an answer.