Thanks for your thoughtful response.
I think this is really the crux of the issue. Some might argue that trying to find structure in randomness is to deny the very existence of randomness. Opinions about whether or not it's futile to even try to find patterns in chaos seem (crudely) to delineate two ideologies: causal inference epidemiology and statistics.
In fairness, I don't think the views of the two camps are as black and white as this. Plainly, causal inference proponents don't deny the existence of randomness, and statisticians don't deny the possible existence of occult cause/effect relationships in the world. Rather, the two groups seem to differ in their opinions about what type(s) of evidence will allow us to make the best "bet" when we are treating our patients, with a view to optimizing their outcomes.
Causal inference epidemiologists seem to feel that the world isn't quite as random as many believe, and that if we can chisel out even a few more cause-effect relationships from the apparent chaos, then maybe we can make better treatment decisions. Maybe there's some truth to this view, but if the number of clinical scenarios in which it might apply is small (e.g., conditions with strongly genetically determined treatment responses), then the cost and effort involved in trying to identify such relationships could easily become prohibitive. In the long run, addressing social determinants of health would likely pay off much more handsomely with regard to improving the health of societies. Seeing governments and research funders throw huge sums of money toward what many consider to be a fundamentally doomed enterprise is aggravating, to say the least.
But getting back to the paper linked at the beginning of this thread: Maybe I’m misconstruing, but the authors seem to believe that it’s possible to infer treatment-related causality for individual subjects enrolled in an RCT, simply by virtue of the fact that they had been randomly assigned to the treatment they ended up receiving. For example, they seem to assume that anybody who died while enrolled in a trial must have died as a direct result of the treatment (or lack thereof) he received during the trial. In turn, it seems that this belief stems from a more deep-seated conviction that a patient’s probability of “responding” to a treatment is somehow “predestined” or engrained in his DNA, and therefore will be consistent from one exposure to the next. People who have posted in this thread are trying to point out that this conviction is incorrect. And if this is the underlying assumption on which the promise of “personalized medicine” hinges, then health systems are throwing away a whole lot of money in trying to advance the cause.
If you were to review case reports for all subjects who died (unfortunately) while they happened to be enrolled in a very large, long-term clinical trial, you wouldn't be surprised to find some subjects who died from airplane crashes, slipping on banana peels, pianos falling from the sky, aggravated assault, accidental drug overdose, electrocution, and myriad medical conditions that were completely unrelated to the treatment they received (e.g., anaphylactic reaction to peanuts, bacterial meningitis outbreak, forgetting to take an important medication…). Events like this are recorded in both arms of clinical trials and have nothing to do with the treatment being tested. Presumably, though, the more deaths that are recorded, the more convinced we can be that between-arm differences in the proportion of patients who died might be due, at a group level, to the treatment in question, rather than to simple bad luck.
Even if the tweet referenced above is now deleted, there's plenty of circumstantial evidence that the author fundamentally believes that patients can be viewed like circuit boards: as though they are intrinsically "programmed" (like a computer) to respond the same way whenever the "input" is the same.
I'm neither a statistician nor an epidemiologist, so I don't know how to phrase what I'm trying to say using math. But after practising medicine for 25 years, I'm not sure that it's possible for physicians to make a better "bet" (in most, but not all, cases) regarding the treatments we select for patients than one that is grounded in the results from well-designed RCTs. This approach seems completely rational to me. Conversely, I perceive innumerable ways that we could fool ourselves in the process of trying to identify engrained "individual responses" in a sea of potentially random events.