Lest we forget: “Epidemiology faces its limits” - 30 years later…

In anticipation of a tsunami of ideologically and financially (via litigation) motivated B.S. publications coming out of the U.S., it seems important to unearth this old chestnut from 1995. In the hands of people with axes to grind, snake oil to sell, and populations to oppress through preventable sickness/poverty/poor education, bad epidemiology can do a LOT of harm (bolding is mine):

A few quotes:

Over the years, such studies have come up with a mind-numbing array of potential disease-causing agents, from hair dyes (lymphomas, myelomas, and leukemia) to coffee (pancreatic cancer and heart disease) to oral contraceptives and other hormone treatments (virtually every disorder known to woman). The pendulum swings back and forth, subjecting the public to an “epidemic of anxiety,”

As Michael Thun, the director of analytic epidemiology for the American Cancer Society, puts it, “With epidemiology you can tell a little thing from a big thing. What’s very hard to do is to tell a little thing from nothing at all.”

As a result, journals today are full of studies suggesting that a little risk is not nothing at all. The findings are often touted in press releases by the journals that publish them or by the researchers’ institutions, and newspapers and other media often report the claims uncritically (see box on p. 166). And so the anxiety pendulum swings at an ever more dizzying rate. “We are fast becoming a nuisance to society,” says Trichopoulos. “People don’t take us seriously anymore, and when they do take us seriously, we may unintentionally do more harm than good.”

“I have trouble imagining a system involving a human habit over a prolonged period of time that could give you reliable estimates of [risk] increases that are of the order of tens of percent,” says Harvard epidemiologist Alex Walker. Even the sophisticated statistical techniques that have entered epidemiologic research over the past 20 years - tools for teasing out subtle effects, calculating the theoretical effect of biases, correcting for possible confounders, and so on - can’t compensate for the limitations of the data, says biostatistician Norman Breslow of the University of Washington, Seattle.

“In the past 30 years,” he says, “the methodology has changed a lot. Today people are doing much more in the way of mathematical modeling of the results of their study, fitting of regression equations, regression analysis. But the question remains: What is the fundamental quality of the data, and to what extent are there biases in the data that cannot be controlled by statistical analysis? One of the dangers of having all these fancy mathematical techniques is people will think they have been able to control for things that are inherently not controllable.”

So what does it take to make a study worth taking seriously? Over the years, epidemiologists have offered up a variety of criteria, the most important of which are a very strong association between disease and risk factor and a highly plausible biological mechanism. The epidemiologists interviewed by Science say they prefer to see both before believing the latest study, or even the latest group of studies. Many respected epidemiologists have published erroneous results in the past and say it is so easy to be fooled that it is almost impossible to believe less-than-stunning results.

Robert Temple, director of drug evaluation at the Food and Drug Administration, puts it bluntly: “My basic rule is if the relative risk isn’t at least three or four, forget it.” But as John Bailar, an epidemiologist at McGill University and former statistical consultant for the NEJM, points out, there is no reliable way of identifying the dividing line. “If you see a 10-fold relative risk and it’s replicated and it’s a good study with biological backup, like we have with cigarettes and lung cancer, you can draw a strong inference,” he says. “If it’s a 1.5 relative risk, and it’s only one study and even a very good one, you scratch your chin and say maybe.”

Some epidemiologists say that an association with an increased risk of tens of percent might be believed if it shows up consistently in many different studies. That’s the rationale for meta-analysis - a technique for combining many ambiguous studies to see whether they tend in the same direction (Science, 3 August 1990, p. 476). But when Science asked epidemiologists to identify weak associations that are now considered convincing because they show up repeatedly, opinions were divided - consistently.

What’s more, the epidemiologists interviewed by Science point out that an apparently consistent body of published reports showing a positive association between a risk factor and a disease may leave out other, negative findings that never saw the light of day. “Authors and investigators are worried that there’s a bias against negative studies,” and that they will not be able to get them published in the better journals, if at all, says Angell of the NEJM. “And so they’ll try very hard to convert what is essentially a negative study into a positive study by hanging on to very, very small risks or seizing on one positive aspect of a study that is by and large negative.” Or, as one National Institute of Environmental Health Sciences researcher puts it, asking for anonymity, “Investigators who find an effect get support, and investigators who don’t find an effect don’t get support. When times are tough it becomes extremely difficult for investigators to be objective.”

In the meantime, UCLA’s Greenland has one piece of advice to offer what he calls his “most sensible, level-headed, estimable colleagues.” Remember, he says, “there is nothing sinful about going out and getting evidence, like asking people how much do you drink and checking breast cancer records. There’s nothing sinful about seeing if that evidence correlates. There’s nothing sinful about checking for confounding variables. The sin comes in believing a causal hypothesis is true because your study came up with a positive result, or believing the opposite because your study was negative.”

As a result, most epidemiologists interviewed by Science said they would not take seriously a single study reporting a new potential cause of cancer unless it reported that exposure to the agent in question increased a person’s risk by at least a factor of 3 - which is to say it carries a risk ratio of 3. Even then, they say, skepticism is in order unless the study was very large and extremely well done and biological data support the hypothesized link. Sander Greenland, a University of California, Los Angeles, epidemiologist, says a study reporting a twofold increased risk might then be worth taking seriously - “but not that seriously.”

6 Likes

I have met many researchers who are good at beautifying their language and methods. Even when the data analysis is obviously wrong, their polished writing convinces people of their conclusions - and they collect plenty of research funding and papers. I don’t understand why so many famous journals accept these papers when even I, a non-statistics major, can find some of the mistakes.
A simple example: if eating 50 g of fish a day reduces the risk of myocardial infarction by 20%, then if I eat 500 g, 1000 g, or more a day, will my risk of myocardial infarction become infinitely close to zero? Why do so many researchers and media outlets never ask?
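
To put rough numbers on the absurdity - a minimal sketch with purely hypothetical values, taking the reported 20% relative risk reduction per 50 g/day at face value and extrapolating it naively:

```python
# Naive extrapolation of a reported relative risk (hypothetical numbers).
# Assumption: 50 g of fish per day is reported to cut myocardial-infarction
# risk by 20%, i.e. RR = 0.8 per 50 g/day increment.
baseline_risk = 0.05   # assumed 10-year baseline risk of MI (hypothetical)
rr_per_50g = 0.8

for grams in (50, 250, 500, 1000):
    increments = grams / 50
    rr = rr_per_50g ** increments        # multiplicative extrapolation
    print(f"{grams:>5} g/day -> RR = {rr:.3f}, "
          f"absolute risk = {baseline_risk * rr:.4f}")

# At 1000 g/day this gives RR = 0.8**20 ≈ 0.012, a ~99% risk "reduction"
# that nobody believes: the association was only ever estimated near
# ordinary intakes, and extrapolating it this far is meaningless.
```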
A college classmate of mine published a life-expectancy prediction paper in a well-known journal, and the university held a press conference for it. A government agency spent a lot of money turning the prediction model into an app, and when internal staff tested it, they found that everyone’s calculated life expectancy came out almost the same - and nearly 20 years higher than the national average. But the project had to continue, with some patches. He gained both fame and fortune, so who lost?

2 Likes

Supreme Court Justice Oliver Wendell Holmes Jr. said, “Certitude is not the test of certainty. We have been cocksure of many things that were not so.”

1 Like

Jiaqi

I can only imagine how frustrating it must feel to be a researcher trying to do good epidemiology in a world where bad epidemiology is everywhere. Good epidemiology (e.g., disease monitoring, descriptive epidemiology) is truly indispensable. Tragically, it’s being actively targeted for destruction in the U.S. Many people will die as a result of these measures.

Unfortunately, observational methods are trivially easy to manipulate. Manipulators include those seeking quick publications for the purpose of career advancement and those intent on generating “evidence” to support their personal socio-political ideologies (e.g., showing the “harms” from vaccines or certain medications, showing the “benefits” of marriage and religious service attendance…). I broached this subject in another recent thread, describing the phenomenon as “crying wolf.” It’s the natural extension of the unfettered practice of risk factor/black box epidemiology.

The dredging of databases to identify frightening harm signals (which, today, usually comprise associations traditionally considered to be “weak,” i.e., on the order of tens of percent), followed by post-hoc manufacturing of biological plausibility, has done incredible harm to patients over the past few decades. Researchers who amplify such signals for ideological reasons or career advancement are either unaware of the harms they inflict on patients, many of whom are benefiting from the therapies they are hell-bent on destroying, or simply don’t care. Generally, those who promote this type of research are not involved in direct patient care and therefore don’t see the consequences of their actions. They have a massive unrecognized blind spot. They don’t appreciate that taking every published weak harm signal seriously would effectively paralyze doctors and patients. We would be unable to treat any condition because we’d be too busy running around all day with our hair on fire after reading the latest headline-grabbing press release. In this world, fear of inflicting a rare, unconfirmed, and nebulous harm would take priority over treating a patient’s real and present suffering and preventing important and common diseases for which, statistically, the patient is at much higher risk without treatment.
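
As a rough, entirely hypothetical illustration of how easily “signals” of that size fall out of database dredging, here is a minimal simulation: 200 exposures with no true relationship to the outcome, each tested separately against it.

```python
# Minimal sketch of database dredging under the null (all numbers hypothetical):
# many exposures with no real effect, each tested against a binary outcome.
import math
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_exposures = 20_000, 200
outcome = rng.binomial(1, 0.05, n_subjects)                  # ~5% outcome prevalence
exposures = rng.binomial(1, 0.3, (n_exposures, n_subjects))  # exposures unrelated to outcome

hits = []
for x in exposures:
    a = np.sum((x == 1) & (outcome == 1))   # exposed cases
    b = np.sum((x == 1) & (outcome == 0))   # exposed non-cases
    c = np.sum((x == 0) & (outcome == 1))   # unexposed cases
    d = np.sum((x == 0) & (outcome == 0))   # unexposed non-cases
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)            # Woolf standard error
    p = math.erfc(abs(log_or) / se / math.sqrt(2))   # two-sided normal-approximation p
    if p < 0.05:
        hits.append((math.exp(log_or), p))

print(f"nominally 'significant' null exposures: {len(hits)} / {n_exposures}")
for odds_ratio, p in sorted(hits):
    print(f"  OR = {odds_ratio:.2f}, p = {p:.3f}")

# Around 5% of truly null exposures come out 'significant' by chance alone,
# typically with odds ratios in exactly the weak, tens-of-percent range
# described above - and that is before any selective reporting.
```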

I fervently hope that the key messages from the article linked in the original post remain top of mind over the next few years. We are now seeing people with absolutely no scientific expertise gain control over U.S. health systems. These people seek to profit directly from promoting the idea that medical treatments and public health interventions with well-established efficacy and important benefits for patients and societies are instead harmful. They want to sue for massive pay-offs in all directions based on baseless accusations of harm, profit from selling snake oil alternatives, and eliminate the sick and infirm through neglect. Their ultimate goal is to render the U.S. population too sick, poor, and uneducated to ever fight back against tyranny. Now, more than ever, it’s essential that experts retain the courage to speak up and differentiate between good/strong evidence and horseshit.

3 Likes

I don’t know the situation in the United States. Where I work, research is often used as a means to gain reputation and status. Rather than delving into research methods and producing research results, more people are willing to run a research project the way one runs a capital venture: letting people with good connections take the important positions, forming cliques, and then making the researchers at the bottom the sacrifices for their promotion.

When I was a postdoctoral fellow at a top national institute, a famous physician in a certain field told me: “I don’t want to hear your so-called research methods and epidemiological theories. I’m not interested! I hired you to produce numbers! Numbers presented at academic conferences! Paper counts! All the numbers that can go into the year-end performance report!” As for the quality of the research, few people care. People in high positions care more about whether you obey their orders. They give instructions without explaining why (when I think back to my doctoral training, the only academic guidance I can recall receiving was to read the previous literature! People in this forum have told me that this was wrong - but, sadly, that was all the academic education I received). When I insisted on questioning their methods, they were furious: “You don’t have to continue this project!”

(Professor Harrell has given me a lot of useful guidance in this forum. A simple example is not to use BMI. He told me not to use BMI no fewer than three times, but I couldn’t manage it. As a result, I now feel enormous psychological pressure whenever I ask a question that involves BMI.) This system is called the seniority system (Nenko system). I don’t want to discredit anyone - I have met many excellent researchers and seen many excellent results - but my current environment is very depressing.

2 Likes

One crucial point omitted from the article above is that under these circumstances, cheating becomes rife. I’d elaborate with some quotes, but seem to have lent out my copy of Turchin’s End Times.

1 Like

In the U.S., the mode is to use poor methods while still being fascinated with theories and ideas.

1 Like

Thank you. It pretty much sums up a realization I came to when posting studies reporting on exposures associated with lymphoma (many dozens of such studies). The message for patients: all that’s published is not gold.

1 Like

If we are concerned particularly about ideological aspects of the next round of methodological misadventures, then the phenomenon of https://en.wikipedia.org/wiki/Lysenkoism also warrants study. To some extent, the claim that such a thing as ‘data science’ can exist apart from the discipline of Statistics amounts to an abandonment of principles in favor of powerful interests — in this case, primarily those of Commerce.

But one can also find recent examples of outrages against statistics flattered even by a first-rate statistician, presumably to curry favor with powerful regulatory authorities.

1 Like

Anybody willing to bet against him?

Anybody care to guess where this invariably “positive” study will be published, since no respectable medical journal will touch it with a ten foot pole?

https://publichealth.realclearjournals.org

It’s almost like this “journal” was created for a specific purpose…

1 Like

Erin, it looks a bit like the Epoch Times of Public Health, doesn’t it? Here’s Peter Gøtzsche’s external review of an antivax Medicaid claims analysis:
https://publichealth.realclearjournals.org/external-article-reviews/2025/03/open-peer-review-of-vaccination-and-neurodevelopmental-disorders-a-study-of-nine-year-old-children-enrolled-in-medicaid/

How to interpret this? Maybe they’re on a quest for higher-quality antivax research? Or perhaps they aim simply to debase the acceptable discourse around such subjects to the level of careless blogging. My own experience doing a half-dozen debunkings over the past decade is that Brandolini’s law might underestimate the level of effort required of the BS’ers. Debunking doesn’t have to be a mind-numbing process, either; the work can be intellectually rewarding if pursued in the right spirit.

It seems likely to me that all the best people will resign from HHS over the coming months, creating a large pool of talent capable — even divided by Brandolini’s supposed factor of 10 — of mounting a serious challenge to such zone-flooding projects. That could be especially true if civil-society groups stepped into the breach to fund those efforts.

1 Like

Yes, these people are all so very, very slippery, aren’t they? This article is an example of professional-level gaslighting.

Given the track record of the author, anybody with two neurons to rub together could only view pieces like this as “plants” for early editions of a journal that is trying to portray itself, right out of the gate, as “objective and neutral.” “See how ‘reasonable’ (i.e., non-extreme) the views of this author are? If he tells me I don’t need to worry about a vaccine/autism link based on this particular study, because it’s fatally flawed, then that means that in the future, when he does tell me to worry, I should really pay attention…” Mark my words - the bait and switch is just around the corner…

Context and history matter A LOT when deciding between a charitable or cynical interpretation of articles like this. Are we really supposed to believe that people whose entire life’s mission, to date, has been to tear down therapies that have helped millions upon millions of people (e.g., HPV vaccines, mental health medications), whose actions have dissuaded untold numbers of patients from potentially life-saving therapies, are suddenly capable of interpreting evidence rationally and objectively?

There’s a “tell” in the article you cited; there’s always a tell. While criticizing a recent publication that suggested a link between neurodevelopmental disorders and vaccines, the author simultaneously states:

“They did several studies and found that nations that require more vaccines for their infants have higher mortality rates in small children. This is alarming and should lead to other studies as a matter of urgency. Since observational studies will always be confounded, we need large randomized trials comparing few vaccinations with many.”

There’s no harm in documenting rare adverse effects of therapies - they do occur. But building a career around blowing exceedingly rare risks completely out of proportion, while simultaneously disregarding the important societal benefits of those same therapies, does enormous damage to public health. Why should physicians ever believe that people with this type of track record are acting in good faith?

3 Likes

Erin, the version of Taubes’s article you uploaded appears to suffer from many OCR errors. On plugging the DOI into Zotero, however, I seem to have obtained something closer to the primary source — in full color, no less. Also available here.


Moreover, in the process I stumbled on this interesting critical reassessment of Taubes:

McCullough LE, Maliniak ML, Amin AB, et al. Epidemiology beyond its limits. Sci Adv. 2022;8(23):eabn3328. doi:10.1126/sciadv.abn3328

2 Likes

Thanks for posting the better version, David. What do you think of the reassessment?

1 Like

I had noted the radon–lung CA association — with surprise that it wasn’t already accepted back then! — when skimming the Taubes piece, but hadn’t thought to consider the whole body of associations retrospectively. So overall, this reassessment added welcome perspective and nuance. I especially liked their term “triangulation of evidence”.

Case in point. It’s staggeringly irresponsible to give this guy a platform. Anybody who treats patients with severe mental illness and witnesses their suffering firsthand will be incandescent with rage at the BS he produces. The editors who let this stuff pass have blood on their hands. They are damaging their reputations beyond any hope of repair.

https://publichealth.realclearjournals.org/external-article-reviews/2025/03/external-article-review-of-efficacy-of-clozapine-versus-second-generation-antipsychotics-in-people-with-treatment-resistant-schizophrenia-a-systematic-review-and-individual-patient-data-meta-analysis/

Have you ever read Whitaker’s Anatomy of an Epidemic, Erin? The conventional wisdom of psychopharmacology does seem to require a critical challenge. But Gøtzsche’s magisterial rage-blogging can only discredit serious efforts in that direction. Thus far, my impression of the JAPH proposition is

Collectively, we have more citations than God; here’s a bunch of blog posts from us with DOIs slapped on them.

David - no, I haven’t read the book, but I’m familiar with the author. As a journalist, not a clinician who actually treats mental illness, his opinions lack credibility (regardless of the number of prizes he’s won). A much more nuanced assessment from someone who is qualified:

NEW- https://www.psychiatrymargins.com/p/what-whitaker-wants-us-to-know-about

2 Likes

I don’t want to get into one of these situations …
[image]

… but it was of course Whitaker’s arguments and not his ‘opinions’ that I would have been pointing to as interesting. (Ralph Nader was merely a lawyer, after all, not an automotive engineer — and so forth.) That RADAR trial does look interesting tho; thank you for pointing it out!

1 Like

I guess I’m the Canada goose in this showdown :slight_smile:

The Ralph Nader analogy would work if Nader had gone around encouraging Ford Pinto owners to brake suddenly in rush hour traffic…

People without subject matter expertise can, occasionally, be correct (the Ford Pinto really was a bomb on wheels). Experts should always remain humble and open to changing their views when presented with solid arguments, regardless of their source. But with subject matter more complex than an exploding car, there’s a real risk that amateur analysis will end up causing a lot of harm. And this is exactly what has happened here.

When amateur analyses are especially detailed, they can easily convince other non-experts, especially those with contrarian dispositions. But experts will often immediately recognize the misrepresentations, the inadequacies of the cited evidence, and the massive blind spots. If I were to read a few stats papers and texts, then write a book on RCT design that contradicted the core views of professional statisticians with years of formal training and decades of practical experience, wouldn’t you consider me presumptuous?

Lack of subject matter expertise + complex subject matter + overconfidence/big ego → overly simplistic analysis → grandiose claims that run contrary to a more expert/nuanced understanding of the subject → public rejection of the grandiose claims by actual experts → wounded ego → ossified ideology underpinned by bitterness/resentment/desire for revenge → increasingly aggressive promotion of grandiose claims/assertions of “conspiracies”/a life spent grinding axes.

For every 1000 people who think they’re Galileo, 999 aren’t - and they (universally) have trouble accepting it.

3 Likes