In anticipation of a tsunami of ideologically- and financially-motivated (via litigation) B.S. publications coming out of the U.S., it seems important to unearth this old chestnut from 1995. In the hands of people with axes to grind, snake oil to sell, or populations to oppress through preventable sickness, poverty, and poor education, bad epidemiology can do a LOT of harm (bolding is mine):
A few quotes:
Over the years, such studies have come up with a mind-numbing array of potential disease-causing agents, from hair dyes (lymphomas, myelomas, and leukemia) to coffee (pancreatic cancer and heart disease) to oral contraceptives and other hormone treatments (virtually every disorder known to woman). The pendulum swings back and forth, subjecting the public to an “epidemic of anxiety.”
As Michael Thun, the director of analytic epidemiology for the American Cancer Society, puts it, “With epidemiology you can tell a little thing from a big thing. What’s very hard to do is to tell a little thing from nothing at all.”
As a result, journals today are full of studies suggesting that a little risk is not nothing at all. The findings are often touted in press releases by the journals that publish them or by the researchers’ institutions, and newspapers and other media often report the claims uncritically (see box on p. 166). And so the anxiety pendulum swings at an ever more dizzying rate. “We are fast becoming a nuisance to society,” says Trichopoulos. “People don’t take us seriously anymore, and when they do take us seriously, we may unintentionally do more harm than good.”
“I have trouble imagining a system involving a human habit over a prolonged period of time that could give you reliable estimates of [risk] increases that are of the order of tens of percent,” says Harvard epidemiologist Alex Walker. Even the sophisticated statistical techniques that have entered epidemiologic research over the past 20 years - tools for teasing out subtle effects, calculating the theoretical effect of biases, correcting for possible confounders, and so on - can’t compensate for the limitations of the data, says biostatistician Norman Breslow of the University of Washington, Seattle.
“In the past 30 years,” he says, “the methodology has changed a lot. Today people are doing much more in the way of mathematical modeling of the results of their study, fitting of regression equations, regression analysis. But the question remains: What is the fundamental quality of the data, and to what extent are there biases in the data that cannot be controlled by statistical analysis? One of the dangers of having all these fancy mathematical techniques is people will think they have been able to control for things that are inherently not controllable.”
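Breslow’s worry is easier to appreciate with a concrete case. The sketch below (my illustration, not from the article, with invented numbers) shows one of the standard adjustment techniques he is alluding to: stratifying on a measured confounder and pooling the strata with the Mantel-Haenszel estimator. The adjustment can strip out confounding by a variable you happened to measure; it says nothing about the variables you didn’t.

```python
# Toy illustration (invented numbers): a "crude" risk ratio versus one
# adjusted for a single measured confounder (smoking) by stratifying
# and pooling with the Mantel-Haenszel estimator.

# Each stratum: (exposed_cases, exposed_total, unexposed_cases, unexposed_total)
strata = {
    "smokers":     (40, 400, 30, 300),
    "non-smokers": (10, 600, 12, 700),
}

def crude_rr(strata):
    a  = sum(s[0] for s in strata.values())  # exposed cases
    n1 = sum(s[1] for s in strata.values())  # exposed person-count
    b  = sum(s[2] for s in strata.values())  # unexposed cases
    n0 = sum(s[3] for s in strata.values())  # unexposed person-count
    return (a / n1) / (b / n0)

def mantel_haenszel_rr(strata):
    num = den = 0.0
    for a, n1, b, n0 in strata.values():
        t = n1 + n0
        num += a * n0 / t
        den += b * n1 / t
    return num / den

print(f"crude RR:    {crude_rr(strata):.2f}")            # ~1.19
print(f"adjusted RR: {mantel_haenszel_rr(strata):.2f}")  # ~0.99
# The apparent 19% excess risk was entirely confounding by smoking.
# Adjustment only works for confounders that were measured; unmeasured
# biases pass straight through, which is exactly Breslow's point.
```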
So what does it take to make a study worth taking seriously? Over the years, epidemiologists have offered up a variety of criteria, the most important of which are a very strong association between disease and risk factor and a highly plausible biological mechanism. The epidemiologists interviewed by Science say they prefer to see both before believing the latest study, or even the latest group of studies. Many respected epidemiologists have published erroneous results in the past and say it is so easy to be fooled that it is almost impossible to believe less-than-stunning results.
Robert Temple, director of drug evaluation at the Food and Drug Administration, puts it bluntly: “My basic rule is if the relative risk isn’t at least three or four, forget it.” But as John Bailar, an epidemiologist at McGill University and former statistical consultant for the NEJM, points out, there is no reliable way of identifying the dividing line. “If you see a 10-fold relative risk and it’s replicated and it’s a good study with biological backup, like we have with cigarettes and lung cancer, you can draw a strong inference,” he says. “If it’s a 1.5 relative risk, and it’s only one study and even a very good one, you scratch your chin and say maybe.”
Some epidemiologists say that an association with an increased risk of tens of percent might be believed if it shows up consistently in many different studies. That’s the rationale for meta-analysis - a technique for combining many ambiguous studies to see whether they tend in the same direction (Science, 3 August 1990, p. 476). But when Science asked epidemiologists to identify weak associations that are now considered convincing because they show up repeatedly, opinions were divided - consistently.
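Since the article leans on the idea without spelling it out, here is a minimal sketch (my own, with made-up study results) of what a fixed-effect meta-analysis actually does: convert each study’s risk ratio to the log scale, weight it by the inverse of its variance, and pool. Several individually ambiguous estimates can combine into one apparently precise one - but any bias the studies share pools right along with the data.

```python
import math

# Hypothetical studies (made-up numbers): each reports a risk ratio
# and a 95% confidence interval.
studies = [
    (1.3, 0.9, 1.9),
    (1.2, 0.8, 1.8),
    (1.4, 1.0, 2.0),
    (1.1, 0.7, 1.7),
]

weights, log_rrs = [], []
for rr, lo, hi in studies:
    # Recover the standard error of log(RR) from the CI width.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    weights.append(1 / se**2)        # inverse-variance weight
    log_rrs.append(math.log(rr))

pooled = sum(w * l for w, l in zip(weights, log_rrs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * pooled_se):.2f}"
      f" to {math.exp(pooled + 1.96 * pooled_se):.2f})")
```

On these invented inputs, four studies that each straddle a risk ratio of 1.0 pool into an estimate whose interval excludes 1.0 - which is precisely why a consistent-looking literature built on shared biases or selective publication can mislead.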
What’s more, the epidemiologists interviewed by Science point out that an apparently consistent body of published reports showing a positive association between a risk factor and a disease may leave out other, negative findings that never saw the light of day. “Authors and investigators are worried that there’s a bias against negative studies,” and that they will not be able to get them published in the better journals, if at all, says Angell of the NEJM. “And so they’ll try very hard to convert what is essentially a negative study into a positive study by hanging on to very, very small risks or seizing on one positive aspect of a study that is by and large negative.” Or, as one National Institute of Environmental Health Sciences researcher puts it, asking for anonymity, “Investigators who find an effect get support, and investigators who don’t find an effect don’t get support. When times are tough it becomes extremely difficult for investigators to be objective.”
In the meantime, UCLA’s Greenland has one piece of advice to offer what he calls his “most sensible, level-headed, estimable colleagues.” Remember, he says, “there is nothing sinful about going out and getting evidence, like asking people how much do you drink and checking breast cancer records. There’s nothing sinful about seeing if that evidence correlates. There’s nothing sinful about checking for confounding variables. The sin comes in believing a causal hypothesis is true because your study came up with a positive result, or believing the opposite because your study was negative.”
As a result, most epidemiologists interviewed by Science said they would not take seriously a single study reporting a new potential cause of cancer unless it reported that exposure to the agent in question increased a person’s risk by at least a factor of 3 - which is to say, it carries a risk ratio of 3. Even then, they say, skepticism is in order unless the study was very large and extremely well done and biological data support the hypothesized link. Sander Greenland, a University of California, Los Angeles, epidemiologist, says a study reporting a twofold increased risk might then be worth taking seriously - “but not that seriously.”
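For anyone who hasn’t met the term, a risk ratio is just the incidence of disease among the exposed divided by the incidence among the unexposed. A toy calculation (my numbers, purely illustrative) shows why the interviewees’ thresholds aren’t arbitrary: with realistic study sizes, a risk ratio of 1.5 is hard to tell from nothing at all, while a ratio of 3 stands well clear of it.

```python
import math

def risk_ratio(exp_cases, exp_total, unexp_cases, unexp_total):
    """Risk ratio with an approximate 95% CI (log-normal approximation)."""
    rr = (exp_cases / exp_total) / (unexp_cases / unexp_total)
    # Standard epidemiological approximation for the SE of log(RR).
    se = math.sqrt(1 / exp_cases - 1 / exp_total
                   + 1 / unexp_cases - 1 / unexp_total)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Two invented cohorts of 1,000 people each.
print(risk_ratio(30, 1000, 20, 1000))  # RR 1.5, CI roughly 0.86 to 2.6: spans 1.0
print(risk_ratio(60, 1000, 20, 1000))  # RR 3.0, CI roughly 1.8 to 4.9: clear of 1.0
```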