I love reading this. I think that statisticians who had very little Bayes in graduate school are the biggest hindrance to teaching more Bayes. Sometimes the more radical of us need to teach newer approaches to non-statisticians, so that they will pressure the statisticians they later work with to learn and use those approaches themselves. I have seen this happen often with teaching spline functions and bootstrapping to future clinical researchers, who then put the right pressure on future statistician colleagues to use these techniques.
I’ll provide a flipped example of this: the majority of clinical researchers I worked with in my previous position put very direct pressure on me NOT to try anything new, in favor of sticking with their preferred “cookbook” approach to statistics (for most papers their preferred recipe was/is: make Table 1 including all descriptive characteristics collected in the registry; present a Kaplan-Meier curve; run univariable Cox models for all baseline factors; create a “final” multivariable Cox model including the variables that showed p<0.05 in the univariable analyses). Working with those collaborators, I had very little incentive to learn new approaches - any time spent teaching myself something new was time I could have spent churning out more of their cookie-cutter papers, and the group I was working with had absolutely zero interest in progressing from a methods standpoint - they would not have given me any extra credit for bringing new methods to the table.
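For concreteness, here is a minimal sketch of that recipe in Python with the lifelines package. The file name `registry.csv` and the `time`/`event` column names are hypothetical placeholders, and I show this only to make the criticized workflow explicit, not to endorse it:

```python
# A sketch of the "cookbook" recipe, step by step. `registry.csv` and the
# column names `time` / `event` are hypothetical; assumes numeric covariates.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("registry.csv")
candidates = [c for c in df.columns if c not in ("time", "event")]

# Step 1: Table 1 of descriptive characteristics (elided here).

# Step 2: Kaplan-Meier curve for the whole cohort (plotting needs matplotlib).
kmf = KaplanMeierFitter()
kmf.fit(df["time"], event_observed=df["event"])
kmf.plot_survival_function()

# Step 3: a univariable Cox model for every baseline factor,
# keeping those with p < 0.05 (the screening step being criticized).
keep = []
for var in candidates:
    cph = CoxPHFitter()
    cph.fit(df[["time", "event", var]], duration_col="time", event_col="event")
    if cph.summary.loc[var, "p"] < 0.05:
        keep.append(var)

# Step 4: "final" multivariable Cox model on the screened variables only.
final = CoxPHFitter()
final.fit(df[["time", "event"] + keep], duration_col="time", event_col="event")
final.print_summary()
```

The univariable p<0.05 screen in step 3 is exactly the part methodologists object to: it can both admit noise variables and drop confounders whose marginal association happens to be weak, and the surviving coefficients are biased away from zero.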
I would have dearly loved to have clinical investigators come to me hoping to use “novel” or at least more thoughtful statistical approaches - that’s exactly the push I wanted/needed to branch out beyond “mean, SD, t-tests, ANOVA, Kaplan-Meier curves, Cox models…that’s how you do statistics in clinical research!”
I fully agree that teaching new/better methods to a generation of clinical researchers will create somewhat of a feedback loop that also pushes statisticians working with them in the future to get familiar with those methods.
Yes I think we need to be bold, and sometimes be choosy about which collaborators we work with. Just as I choose a physician who keeps up with clinical research in her area and I don’t impose my clinical knowledge on her, I expect clinicians to turn to me for my expertise in biostatistics. The cookbook approach to statistics you outlined, which is all too common, is something I have fought my whole career. But it’s still very common, unfortunately.
On the flip side, it is frustrating when clinical journals do not appreciate gold-standard methods. Get the top-tier journals to raise their standards, the lower tiers will follow, and everyone will be at your door asking how to do things to spec.
This has to be in place because the best methods are not simple, and they require funds for statistical support or a significant time investment on the part of people who are already too squeezed.
On the positive side, I’ve eventually won every argument with journal editors/reviewers. I just had to persevere and provide references and sometimes examples/simulations.
The sort of thing that convinced me to search for the best methods involved examples of results going quite wrong when the wrong method was used.
There is really no substitute for having to work on real projects with real code and realizing just how many decisions there are to make and seeing how flux in those decisions creates flux in the results.
Pavel, this may seem like a silly question, but can you provide any references for observational studies, clinical trials, or randomised clinical trials where it is shown that using a “cookie-cutter” method provides meaningfully incorrect results, which are corrected by using a method whose assumptions are fulfilled in the data? I ask because I would love to have references to show my clinical colleagues to demonstrate the added value of doing things correctly.
One example: the ventricular arrhythmia analysis in my Information Allergy chapter, an example where categorization could be said to have killed patients.
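For a quick demonstration to hand to colleagues, here is a minimal simulation sketch (hypothetical data, not the arrhythmia example, again using Python’s lifelines) of how dichotomizing a continuous risk factor discards information and makes the answer depend on an arbitrary cutpoint:

```python
# Hypothetical simulation of the cost of categorizing a continuous predictor.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                          # continuous risk factor
time = rng.exponential(1.0 / np.exp(0.5 * x))   # true effect is log-linear in x
event = np.ones(n, dtype=int)                   # no censoring, to keep it simple

for label, covariate in [
    ("continuous", x),
    ("cut at median", (x > np.median(x)).astype(int)),
    ("cut at 75th percentile", (x > np.quantile(x, 0.75)).astype(int)),
]:
    df = pd.DataFrame({"time": time, "event": event, "x": covariate})
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    print(f"{label:24s} HR = {cph.hazard_ratios_['x']:.2f}, "
          f"partial log-lik = {cph.log_likelihood_:.1f}")
```

Across runs of this setup, the continuous model should attain the highest partial log-likelihood (all three models have a single parameter, so the comparison is fair), while the estimated hazard ratio shifts with the arbitrary cutpoint - the “flux in decisions creates flux in results” problem in miniature.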