Philosophers have written enough papers to fill a library on why this position is logically untenable, and no amount of appeals by physicians to the unique aspects of patient care will overcome this.
My major gripes are: (1) the EBM position is logically false, and the vast majority of uncontroversial treatments would be ruled out by it; (2) other fields uncritically adopt EBM rhetoric and inference rules, leading to nonsense in the peer-reviewed literature.
One of the real-world cases that drew the attention of philosophers of science and ethicists involved the study of ECMO (extracorporeal membrane oxygenation) for newborns in respiratory distress. Richard Royall, a Johns Hopkins professor of biostatistics, had this to say about this dogma of EBM from both an ethical and a statistical point of view. A link to the paper is in the thread.
> We urge that the view that randomized clinical trials are the only scientifically valid means of resolving controversies about therapies is mistaken, and we suggest that a faulty statistical principle is partly to blame for this misconception.
To be blunt about this demand for randomization in all treatment contexts: how many infants would need to be randomized to an ineffective intervention (and, in essence, condemned to die with high probability) in the ECMO case?
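To put rough numbers on that rhetorical question (the mortality rates and trial size below are assumptions for the sake of arithmetic, not figures from the actual neonatal ECMO trials):

```python
# Illustrative arithmetic only: mortality rates and trial size are assumed,
# not taken from the ECMO literature.
p_control = 0.80   # assumed mortality on conventional therapy
p_ecmo = 0.20      # assumed mortality on ECMO
n_per_arm = 30     # assumed size of each arm in a 1:1 randomized trial

expected_control_deaths = n_per_arm * p_control
excess_deaths = n_per_arm * (p_control - p_ecmo)  # deaths attributable to withholding ECMO

print(f"Expected control-arm deaths: {expected_control_deaths:.0f}")
print(f"Expected excess deaths vs. giving everyone ECMO: {excess_deaths:.0f}")
```

Under assumptions anywhere near these, the "price" of insisting on randomization is a predictable number of preventable deaths, which is exactly Royall's ethical point.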
Paul Rosenbaum, another statistician, has written a number of papers showing how a carefully done observational study can, by demonstrating insensitivity to unknown confounders, approximate a randomized experiment.
> Randomized experiments and observational studies both attempt to estimate the effects produced by a treatment, but in observational studies, subjects are not randomly assigned to treatment or control. A theory of observational studies would closely resemble the theory for randomized experiments in all but one critical respect: In observational studies, the distribution of treatment assignments is not known… Using these tools, it is shown that certain permutation tests are unbiased as tests of the null hypothesis that the distribution of treatment assignments resembles a randomization distribution against the alternative hypothesis that subjects with higher responses are more likely to receive the treatment. In particular, these tests are unbiased against alternatives formulated in terms of a model previously used in connection with sensitivity analyses.
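To make the idea concrete, here is a minimal sketch of a Rosenbaum-style sensitivity analysis for matched pairs with a binary outcome. The pair counts and the grid of Γ values are made up for illustration; the point is only that the worst-case p-value of a sign/McNemar test can be bounded as a function of Γ, the assumed maximum degree of hidden bias in treatment assignment.

```python
from scipy.stats import binom

# Rosenbaum-style sensitivity bounds for matched pairs, binary outcome.
# Hypothetical counts: among discordant pairs, `treated_worse` is the number
# of pairs where the treated unit had the bad outcome and the control did not.
discordant_pairs = 40
treated_worse = 30

for gamma in (1.0, 1.5, 2.0, 3.0):
    # With hidden bias at most gamma, the chance that the treated unit is the
    # one with the event in a discordant pair is at most gamma / (1 + gamma).
    p_upper = gamma / (1.0 + gamma)
    # Worst-case one-sided p-value for the sharp null of no treatment effect.
    worst_case_p = binom.sf(treated_worse - 1, discordant_pairs, p_upper)
    print(f"Gamma = {gamma:.1f}: worst-case p-value = {worst_case_p:.4f}")
```

If the worst-case p-value stays small even for fairly large Γ, the observed association is insensitive to substantial hidden bias, which is the sense in which a well-done observational study approximates a randomized experiment.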
Rosenbaum elaborates on the randomization fallacy in this 2015 paper:
Rosenbaum, P. R. (2015). How to see more in observational studies: Some new quasi-experimental devices. Annual Review of Statistics and Its Application, 2, 21–48.
> The statistical literature may be misread to say that only the elimination of ambiguity, not its reduction, is acceptable. Such a misreading might result in skepticism about quasi-experimental devices that reduce, but do not eliminate, ambiguity.
He goes on to discuss the technical issue of identification (of model parameters) and argues that observational studies can still provide information even when they do not provide identification.
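A standard way to see how data can inform without identifying is partial identification: with a bounded outcome and no assumptions about the unobserved counterfactuals, the average treatment effect is not point-identified, but the data still pin it to an interval. The sketch below is a generic worst-case-bounds illustration with made-up numbers, not one of Rosenbaum's own devices.

```python
# Worst-case (Manski-style) bounds on an average treatment effect for an
# outcome bounded in [0, 1]. All observed proportions below are hypothetical.
p_treated = 0.40          # share of subjects who received the treatment
y1_given_treated = 0.70   # observed outcome mean among the treated
y0_given_control = 0.50   # observed outcome mean among the controls

# E[Y(1)]: observed for the treated, completely unknown (0 to 1) for the controls.
e_y1_lo = y1_given_treated * p_treated + 0.0 * (1 - p_treated)
e_y1_hi = y1_given_treated * p_treated + 1.0 * (1 - p_treated)

# E[Y(0)]: observed for the controls, completely unknown for the treated.
e_y0_lo = y0_given_control * (1 - p_treated) + 0.0 * p_treated
e_y0_hi = y0_given_control * (1 - p_treated) + 1.0 * p_treated

ate_lo = e_y1_lo - e_y0_hi
ate_hi = e_y1_hi - e_y0_lo
print(f"ATE is partially identified in [{ate_lo:.2f}, {ate_hi:.2f}]")
```

For a binary outcome these no-assumptions bounds always have width one; the informative part is where the interval sits, and the quasi-experimental devices Rosenbaum discusses are ways of shrinking it further.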
Regarding Basu’s definition of identification:
> The entry defines identifiable to mean in different states of the world … yield probability distributions for observable data that are themselves different. One could misread this statement as saying we learn nothing about θ unless there is identification, nothing unless there is a consistent test for each level of θ. More careful than most, Basu is aware we often learn in nonidentified situations.
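One way to see Basu's point is a toy Bayesian example (my own, not Rosenbaum's or Basu's): if only the sum θ1 + θ2 is identified by the data, observing data still shrinks the posterior for θ1 alone.

```python
import numpy as np

# Toy example: we observe x ~ Normal(theta1 + theta2, 1), so only the sum is
# identified, yet the posterior for theta1 alone is sharper than its prior.
rng = np.random.default_rng(0)
n_draws = 200_000

theta1 = rng.normal(0.0, 1.0, n_draws)   # prior: theta1 ~ N(0, 1)
theta2 = rng.normal(0.0, 1.0, n_draws)   # prior: theta2 ~ N(0, 1)
x_obs = 3.0                              # a single hypothetical observation

# Self-normalized importance weights proportional to N(x_obs | theta1 + theta2, 1).
log_w = -0.5 * (x_obs - (theta1 + theta2)) ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

post_mean = np.sum(w * theta1)
post_var = np.sum(w * (theta1 - post_mean) ** 2)
print(f"theta1 prior variance: 1.00, posterior variance: {post_var:.2f}")
# Analytically the posterior variance is 2/3: we learn about theta1 even
# though theta1 is not identified.
```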
From a decision-theoretic perspective, this reduction in ambiguity might be enough evidence to promote an intervention or change policy. But that is context-sensitive and cannot be decided a priori.
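As a sketch of what "context-sensitive" means here (the interval, the cost threshold, and the rules are all hypothetical), one can compare a maximin rule, which acts only if even the worst case in the partially identified interval clears the cost, with a less conservative rule based on the interval's midpoint:

```python
# Hypothetical decision sketch: the effect is only known to lie in [lo, hi].
effect_lo, effect_hi = -0.05, 0.30   # partially identified treatment effect
cost_of_intervention = 0.02          # benefit (in effect units) needed to justify acting

# Maximin rule: act only if even the worst case clears the cost.
maximin_act = effect_lo > cost_of_intervention

# Less conservative rule: act if the interval midpoint (a crude stand-in for a
# prior mean over the interval) clears the cost.
midpoint_act = (effect_lo + effect_hi) / 2 > cost_of_intervention

print(f"Maximin rule says act: {maximin_act}")
print(f"Midpoint rule says act: {midpoint_act}")
```

Which rule is appropriate depends on the stakes and the cost of being wrong in each direction, which is exactly the sense in which the question cannot be settled a priori.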
Further Reading
https://www.sciencedirect.com/science/article/abs/pii/S0895435617301440
https://www.sciencedirect.com/science/article/abs/pii/S0895435616001475