If you think pointing out logical and mathematical flaws in the journals and textbooks that you still accept is “doctor bashing”, that says nothing about me. You are still free to point out an error in my reasoning (as an honest scholar would).
The fact is, most clinicians are simply too busy with patient care or admin duties to also become competent in data analysis. Nor can they make informed, independent decisions when critical data is not published.
When I did clinical care, I certainly was. What is taught in medical school, or even in continuing-education (CE) classes, is not enough.
What is enough? Just to become an “Associate” of the Society of Actuaries or the Casualty Actuarial Society (the professionals charged with making sure risk is managed properly) takes most people with quantitative aptitude close to four years. Certainly that is overkill for clinicians, but a calculus-based mathematical statistics course is the minimum.
This assumes instructors have an adequate understanding of statistics and mathematics. I think the past 100 years indicate they do not.
Bernardo, J. M. (2003). [Reflections on Fourteen Cryptic Issues concerning the Nature of Statistical Inference]: Discussion. International Statistical Review / Revue Internationale de Statistique, 71(2), 307–314.
Established on a solid mathematical basis, Bayesian decision theory provides a privileged platform from which to discuss statistical inference.
When I pointed this out in another thread, this was the reply:
"Bayesian decision making” it’s not very common in med research, as far as I can see. And it is also not very commonly meant in intro statistics books
Contrast that reply with Senn's remarks in a guest post on Deborah Mayo's blog:
Before, however, explaining why I disagree with Rocca and Anjum on RCTs, I want to make clear that I agree with much of what they say. I loathe these pyramids of evidence, beloved by some members of the evidence-based movement, which have RCTs at the apex or possibly occupying a second place just underneath meta-analyses of RCTs. In fact, although I am a great fan of RCTs and (usually) of intention to treat analysis, I am convinced that RCTs alone are not enough.
I don’t like arguments from authority, but I’ve cited enough statistical experts that anyone who thinks I’m incorrect is honor-bound to give an explicit logical argument refuting my claim.
My claim is this: using pre-data design criteria (the basis of those evidence pyramids) as an ordinal, qualitative measure of the validity of an individual study has no basis in mathematics.
Goutis, C., & Casella, G. (1995). Frequentist Post-Data Inference. International Statistical Review / Revue Internationale de Statistique, 63(3), 325–344. Frequentist Post-Data Inference on JSTOR
The end result of an experiment is an inference, which is typically made after the data have been seen (a post-data inference). Classical frequency theory has evolved around pre-data inferences, those that can be made in the planning stages of an experiment, before data are collected. Such pre-data inferences are often not reasonable as post-data inferences, leaving a frequentist with no inference conditional on the observed data.
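A standard textbook illustration of that gap (not taken from the paper, but making the same point) uses two observations from a Uniform(theta - 1/2, theta + 1/2) distribution: the interval running from the smaller to the larger observation is a valid 50% confidence interval pre-data, yet once you see how far apart the two observations fall, that 50% statement can be badly misleading for the data actually in hand. A quick simulation, with an arbitrary true value chosen only for illustration:

```python
# Pre-data vs. post-data: the interval [min(x1, x2), max(x1, x2)] covers
# theta in exactly 50% of repeated samples, but its conditional coverage
# given the observed spread |x1 - x2| is very different.
import numpy as np

rng = np.random.default_rng(0)
theta = 3.7                       # arbitrary "true" value for the simulation
x = rng.uniform(theta - 0.5, theta + 0.5, size=(1_000_000, 2))

lo, hi = x.min(axis=1), x.max(axis=1)
covered = (lo <= theta) & (theta <= hi)
spread = hi - lo

print("unconditional coverage:  ", covered.mean())                  # ~0.50
print("coverage | spread > 0.50:", covered[spread > 0.50].mean())   # ~1.00
print("coverage | spread < 0.25:", covered[spread < 0.25].mean())   # ~0.14
```

When the two points are more than 1/2 apart, the interval is certain to contain theta; when they are close together, it rarely does. The pre-data 50% statement is true, yet unhelpful for the case in front of you.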
This is why James Berger (who literally wrote the book on statistical decision theory) went to such lengths to work out conditional frequentist methods for testing. That work is close to 30 years old, yet we are still debating the proper interpretation of p-values.
You cannot improve clinical research without an understanding of decision theory, which links the design of experiments (and the value of information) to the broader decision context. The fact that EBM went off and developed its heuristics in complete ignorance of well-established mathematical results has always seemed suspicious to me.
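To make that link concrete, here is a toy value-of-information calculation. Everything in it is a hypothetical modelling choice for illustration (the Beta(2, 2) prior, the known standard-care response rate, the affected population size): it prices a planned trial of size n by how much the trial is expected to improve an adopt-or-not decision, which is exactly the kind of quantity that ties design to the wider decision context.

```python
# Toy expected value of perfect / sample information for the decision
# "adopt the new treatment or keep standard care". All inputs hypothetical.
import numpy as np

rng = np.random.default_rng(42)
sims = 100_000
pop = 10_000                      # patients affected by the adoption decision
p_std = 0.5                       # assumed known response rate, standard care

# Prior uncertainty about the new treatment's response rate.
theta = rng.beta(2, 2, size=sims)

def gain(p_new):
    # Responses gained across the population by the better of the two actions.
    return pop * np.maximum(p_new - p_std, 0.0)

# Value of deciding now, on the prior mean alone.
value_now = gain(theta.mean())

# Expected value of perfect information: decide knowing theta exactly.
evpi = gain(theta).mean() - value_now

# Expected value of sample information for a planned trial of n patients.
def evsi(n):
    y = rng.binomial(n, theta)            # simulated trial outcomes
    post_mean = (2 + y) / (4 + n)         # conjugate Beta(2, 2) update
    return gain(post_mean).mean() - value_now

print("EVPI:", round(evpi, 1))
print({n: round(evsi(n), 1) for n in (10, 50, 200, 1000)})
```

Larger trials buy more expected value, but never more than the EVPI; weighing that expected value against the trial's cost is the design question decision theory actually poses.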