As someone who has examined the clinical decision-making literature from a number of angles (work written by clinicians, by statisticians and mathematicians, and by philosophers of science), I find the papers written by clinicians frustrating far more often than I expected.
Clinicians have gaps in their understanding of statistics and reasoning that they do not realize they have. I know I had them for a long while, until I had to understand work written in other quantitative fields.
I gather that they can use the terminology of statistics in a way that keeps these misconceptions from being obvious when they interact with statisticians. But the intuitions statisticians have and the ones clinicians have are not the same.
Example: the most widely cited definition of “evidence based medicine,” from Sackett et al. (1996) (link):
> The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients.
About a year later, Richard Royall wrote the following in his book Statistical Evidence: A Likelihood Paradigm:
> Standard statistical methods regularly lead to the misinterpretation of results of scientific studies. The errors are usually quantitative, as when the statistical evidence is judged to be stronger or weaker than it really is. But sometimes they are qualitative – sometimes evidence is judged to support one hypothesis over another when the opposite is true. These misinterpretations are not a consequence of scientists misusing statistics. They reflect instead a critical defect in current theories of statistics.
So we have clinicians assuming they understand what “evidence” means in a scientific context, while statisticians of Royall’s caliber are far more circumspect. This gap is also part of why the problems with p-values have persisted for decades.
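To make the quantitative side of Royall’s complaint concrete, here is a small sketch (my own illustration, not from Royall or the post) contrasting a two-sided p-value with a likelihood ratio for the same binomial data. The sample size, counts, and hypothesized proportions are all invented for the example:

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability of exactly k successes in n trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical data: 60 successes in 100 trials.
n, k = 100, 60

# Likelihood ratio: evidence for H1 (p = 0.6) over H0 (p = 0.5).
lr = binom_pmf(k, n, 0.6) / binom_pmf(k, n, 0.5)

# Exact two-sided p-value under H0: total probability of outcomes
# at least as far from the null expectation (50) as the observed 60.
p_value = sum(binom_pmf(i, n, 0.5) for i in range(n + 1)
              if abs(i - n * 0.5) >= abs(k - n * 0.5))

print(f"likelihood ratio = {lr:.2f}")
print(f"two-sided p-value = {p_value:.4f}")
```

With these numbers the p-value lands just above the conventional 0.05 cutoff (about 0.057), so the result would typically be reported as “not significant,” even though the likelihood ratio (about 7.5 to 1 in favor of p = 0.6) says the data offer fairly strong evidence for the alternative. The two summaries measure different things, which is exactly the kind of mismatch Royall is pointing at.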
There are even more egregious examples of the medical literature not knowing what mathematicians worked out long ago:
Letters to the editor pointed out that this paper had rediscovered the trapezoidal rule from calculus. (link)
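For reference, the method the letters are talking about is a few lines of code. This is a minimal sketch of the standard trapezoidal rule (the variable names and sample data are my own, not taken from the paper):

```python
def trapezoid_auc(xs, ys):
    """Approximate the area under a curve sampled at points (xs[i], ys[i]).
    Each interval contributes the average of its two endpoint heights
    multiplied by the interval width."""
    return sum((ys[i] + ys[i + 1]) / 2 * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

# Hypothetical concentration readings over time, the kind of data
# an area-under-the-curve calculation is typically applied to.
times = [0, 30, 60, 90, 120]        # minutes
conc  = [5.0, 8.0, 7.0, 6.0, 5.5]   # mmol/L
print(trapezoid_auc(times, conc))   # → 787.5
```

It is a one-liner that has been in calculus textbooks for centuries, which is what made the rediscovery notable.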
I wish I knew the solution. But the thread here on common statistical errors is a good start.