Statistics Reform

I’ve always seen your research program as a model, Frank! Your methods work has a very strong connection to substantive topical projects, which makes it relevant and useful for actual statistical practice! I also really like the way you and colleagues developed things at Vanderbilt with embedded statisticians. The optimal setting is synergy between applied work and methods development, with credit given for both.

I strongly agree about funding. Getting methods work funded through substantive/topical grants is very effective, and leads to more relevant methods problems. It means that we’re answering methods questions that move substantive research forward as well. This is particularly relevant for large network grants. When we developed CNODES (www.cnodes.ca), Samy Suissa insisted that we retain ~5-10% of the project funding for methods work. This has been a great source of methods funding, and there is a nice synergy between our methods work for CNODES and the drug safety work we do for Health Canada. It’s also a fantastic way to train applied statistics students – they see early in their career what it’s like to participate in substantive projects and develop methods problems organically.

3 Likes

I did a PhD in medicine instead of a PhD in statistics for that reason; my supervisor was a cardiologist, not a statistician. I’m not sure young statisticians are aware of this option, and I’m not sure how long the option has existed(?), but there seem to be only a few universities around the world that offer a PhD in medicine for biostatisticians. You have to be confident in biostats, obviously, but you are completely free to do applied stats and develop methodology simultaneously. And the outcome is papers in medical journals, a wider readership, and the potential to influence the practice of stats, rather than something buried in Statistics in Medicine. I think Statistics in Medicine and SMMR are coveted, but why? It’s just for prestige; no clinician will ever read them, and clinicians are the decision makers – they are the ones, mostly, who run the AROs.

4 Likes

I have no idea! 🙂 I have to say that I am generally wary of the push to publish undergraduate research. It’s just impossible to have any kind of quality control in that scenario. By all means, teach research and provide opportunities for experience, but we need to stop pretending that most of it isn’t clumsy (as it should be!).

1 Like

This is my understanding. At the end of the day, any call for better research will inevitably lead to less research, which is an unacceptable outcome for any group relying on politicians to send money their way.

1 Like

Yes, I have found myself in a really lucky position where I have some job security without having to chase my own grants (we still do, but I don’t live or die on it), so I am free to collaborate on other people’s projects and to conduct training for things like open science, FAIR data, R workflows, etc. The problem is down the road, though: will I eventually be in a position for promotion to full professor, based on how it’s done here? Right now, I’d call it a long shot. And I have no interest in developing novel methods (probably due to lack of aptitude!), so I can’t hang my hat on that either. So in most cases, asking someone to be a purely applied, collaborative statistician is to ask them to commit career suicide – plus people don’t want to fund ~permanent positions but would rather treat them like postdoctoral researcher posts, and nobody is going to choose that life over industry/govt/NGO.

2 Likes

Editorial suggestion: highlight text you are replying to and see the Quote button appear. This will paste highlighted text with appropriate markup into the active window, or create a new active “reply” window.

1 Like

The reason why I think this happens, as I’ve pointed out above, is that there is no pressure to employ statisticians as permanent or long-term members of a research lab/team. Compare this with bioinformatics: research groups that rely heavily on 'omics data can’t do without dedicated bioinformaticians, because without people with that skillset they would be unable to process and analyse all that data in a reasonable timeframe. So they have to put money into it. They have no option. But in a publishing environment in which most editors and reviewers alike know little of statistics, all you need is p < 0.05. There is no pressure to spend money on statisticians when study design and data analysis aren’t seriously evaluated when a paper is submitted to a journal. The situation for applied statisticians working specifically on data analysis, as opposed to method development, will only change when publishing culture changes, if ever.

2 Likes

I’m in two minds about whether editors should demand the programming code from authors. It would reveal that the authors are using point-and-click drop-down menus in SPSS to do analyses without really understanding the code behind them, and it would then put pressure on researchers to employ statisticians to minimise embarrassment and ensure data integrity.

3 Likes

My observation, based on reading so many statistics-related articles, is that there is a need for a logic handbook designed specifically for statistics and even mathematics. I don’t think the underlying logic is covered sufficiently in training.

1 Like

Not sure if I agree. We live in a rational world where everything needs to be quantifiable, so institutions/funders/government can evaluate whether the investment paid off. We won’t be able to change that, but we may be able to change how things are quantified. Why aren’t we weighting publications with a quality score?

A research quality score could give higher weights to publications with published study protocols. Your research could get a higher weight if you can demonstrate that an independent group successfully replicated your findings. We may even give a higher score to validation or replication studies, to compensate for the fact that these are hard to publish.

I think such a score would have to have a broad outlook, applicable in very different scientific fields (so not just focused on stats methods). For things to improve, doing the right thing must be made more attractive…
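To make this concrete, here is a minimal sketch of how such a weighting could work. The criteria and the numeric weights are purely illustrative assumptions on my part, not a calibrated proposal:

```python
# Toy research-quality weight. Every criterion and number below is an
# illustrative assumption, not a calibrated or agreed-upon scheme.

def quality_weight(protocol_published: bool,
                   independently_replicated: bool,
                   is_replication_study: bool) -> float:
    """Weight one publication by quality markers instead of counting it as 1."""
    w = 0.2  # baseline: any publication still counts for something
    if protocol_published:
        w += 0.4  # the study protocol was published in advance
    if independently_replicated:
        w += 0.4  # an independent group reproduced the findings
    if is_replication_study:
        w += 0.3  # bonus to offset how hard replications are to publish
    return w

# A pre-registered study whose findings were independently replicated:
print(quality_weight(True, True, False))    # 1.0
# A quick paper with none of the quality markers:
print(quality_weight(False, False, False))  # 0.2
```

A funder or institution could then sum these weights across a researcher’s output rather than counting papers.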

2 Likes

Who should conduct the score-keeping? Thanks.

1 Like

There is too much research being done. As Doug Altman famously said, we need to concentrate funding on the better research. https://www.bmj.com/content/308/6924/283

5 Likes

In a recent lecture at NIH, John Ioannidis mentioned that there were an estimated 180 million research articles out there, from about 2009 onward.

1 Like

Yes, to quote him: “Abandoning using the number of publications as a measure of ability would be a start.” Hence my thinking: if one high-quality paper (quality in methods, regardless of statistical significance) counts as much as five bad ones, the number of publications will drop, and funders will have an incentive to stop favoring hyped topics over good methods. What is wrong with my reasoning?
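Under illustrative weights like the ones sketched earlier in the thread (again, the numbers are assumptions made purely for the sake of the example), the arithmetic does work out that way:

```python
# Hypothetical weights, for illustration only: a pre-registered, independently
# replicated paper scores 1.0; a paper with neither quality marker scores 0.2.
good_paper = 1.0
bad_paper = 0.2

# Five low-quality papers no longer outscore one high-quality paper:
print(5 * bad_paper <= good_paper)  # True
```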

3 Likes

If the goal is to improve statistical training for non-statisticians, my suggestion would be to create a framework or approach that can teach important concepts and applications with less formal mathematics. Can we educate someone whose knowledge base is equivalent to that of a high school (or secondary school) graduate?

Here is an interesting example where elementary school children in Uganda were taught to assess reliability of health claims:
https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(17)31226-6/fulltext

4 Likes

Ellie Murray once said something on Twitter like “If the goal is to teach, that’s homework, not research.” I very much agree that encouraging or requiring students and trainees to produce a published article (or more…) has a lot of negative effects. For one, it creates a lot of bad papers, and it also gets them started with a lot of bad habits, so even as they advance in their careers they’re often still working on pretty bad papers.

The reasoning is sound; the execution is likely to be the challenge. As Sameera said…

3 Likes

I haven’t been following really closely here, but I saw this and it seemed germane…

Amrhein V, Greenland S, McShane B. Scientists rise up against statistical significance. Nature 2019;567:305. doi:10.1038/d41586-019-00857-9

2 Likes

It didn’t take much, as I work with an ED consultant and am regularly in the ED… I just needed the permission of the head of ED to shadow the consultant for a day. The motivation is simple: if I’m tasked with trying to help a clinician’s decision making, I must first try to understand where they are “at”, what barriers and difficulties they face, and how data are being collected.

3 Likes

Very wise. Thank you!

There was a plea on Twitter as follows:

It would help if more MDs and scientists would stand up against fake "peer-reviewed" papers and non-scientific publishers that spread false information. And by commenting on @pubpeer, although I would recommend anonymously :-)

— Elisabeth Bik (@MicrobiomDigest) March 30, 2019

Does anyone here use PubPeer, anonymously or otherwise? I have no experience with it. Is there much value a statistician can provide aside from saying, e.g., “use fewer decimal places when reporting p-values”?

2 Likes