I happened to be scanning the links on Hacker News and found this very interesting (and shocking) article. It opens up a Pandora's box of difficult questions for research synthesis and meta-analysis.
A cautionary tale for the ages. Many versions of this story litter the history of medicine: a fundamental early error in a field (whether nefarious or naive) goes unnoticed for decades, resulting in millions (billions?) of dollars in wasted research money, a lack of progress, and, ultimately, harm to patients. Reminds me of this article:
High-profile scientists are adept at securing funding by arguing that their attempt at solving a problem, with slight tweaks, will be different from those of hundreds of failed predecessors who used similar approaches. Maybe it's difficult for grant reviewers without equally deep subject-matter knowledge to challenge such claims(?). But even without such knowledge, there's little excuse for reviewers' failure to periodically step back and take a bird's-eye view of the historical progress of a given line of research. Someone, somewhere, needs to make the difficult call to turn off the funding spigot when many years of research have led to nothing but dead ends. Whose job is it to say "we need to stop and regroup at this point and try to figure out how we got onto this hamster wheel in the first place" and "did we make a very early, fundamental error in the process"?