I am a surgeon. I have a year of graduate-level epi/biostats training, and while I feel comfortable with some methods, I always feel that a discussion with a fully trained epi/biostats person helps (at the very worst I come away with no additional info). However, and particularly in the surgical literature, well-meaning (and some not-so-well-meaning) surgeons and physicians perform research that is sometimes bad, and at other times woefully negligent.
This probably happens less in some fields where stats is a core part of training, but I think suggesting that having an MD fully prepares you in methodology is laughable. Even my minimal training did more to expose me to the possibilities, limitations, and basics than to prepare me for every possible situation.
So here I am asking whether every medical paper should have a statistical review. I am not quite suggesting that statisticians tear everything apart, but even a gross review for major problems would be useful. Perhaps statisticians could also help make papers clearer by requiring that assumptions be clearly stated.
I am sure this has been discussed before in other venues, but I would really love to hear what the community thinks. It is possible this is overkill, but it is also possible that it would greatly reduce the trash that gets published and improve the quality of the literature.
BTW: I think this would be a great idea, and would support it.
I'm glad you opened this topic. Being a statistician, I'm biased in favor of having statisticians review medical article submissions much more frequently. The question is which level of statistician; I think MS-equivalent or higher is warranted. But I also feel that many of the things that get by fall under quantitative reasoning more than under statistics. For example, when should percentages be used vs. raw data? If percent change is used, should it be computed on raw patient data or only on aggregate summary statistics (usually the latter, IMHO)? So we really need quantitatively literate reviewers first.
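To illustrate the percent-change point with a minimal sketch (made-up numbers, just the kind of pitfall I have in mind): averaging patient-level percent changes can be dominated by a patient with a tiny baseline value, while the percent change computed on the aggregate summaries tells a different story.

```python
import numpy as np

# Hypothetical data: one patient has a very small baseline value
baseline = np.array([2.0, 50.0, 60.0, 55.0])
followup = np.array([4.0, 45.0, 54.0, 50.0])

# Percent change computed per patient, then averaged
per_patient = 100 * (followup - baseline) / baseline
print(per_patient.mean())   # about +17.7%, dominated by the patient with baseline 2.0

# Percent change computed on aggregate summary statistics (group means)
aggregate = 100 * (followup.mean() - baseline.mean()) / baseline.mean()
print(aggregate)            # about -8.4%: the group means actually declined
```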
I'm surprised how eager authors are to please reviewers' demands, and thus my worry would be that stats reviewers will make unreasonable requests and authors will fulfill them. Statisticians disagree on many issues and can appear uninformed, etc., and if there's no statistician on the author's side inclined to push back, then the reviewer, as gate-keeper, can impose their own personal habits wholesale. I think you need a statistician on both sides, and maybe they should bicker with one another to achieve the best results.
Absolutely agree that responding to reviewers is difficult. Pushing back can be hard, especially if you are unsure. I guess my hope is that knowing every paper you submit will get a statistical review would encourage authors to enlist a statistician at some point (hopefully earlier rather than later).
I would love to hear other thoughts about difficulties with this idea.
I'm not familiar with the procedures across all journals, but I do know that Lancet journals seek a statistical review for all research articles. This is often the undoing of papers, even major trials.
Since it is much easier to learn what not to do than it is to learn what to do, let's at least make sure clinical reviewers study these statistical errors to avoid.
I'm glad that you brought this up. After stewing further on this paper (and, of course, the fact that dozens like it appear every week), I too was thinking to myself, "What can I do, besides starting angry Twitter threads, to actually do something about this?" I have a proposal (at the bottom of this post).
The surgical literature often goes there, yes, but it is far from alone.
Oh, no, rest assured, this happens everywhere.
Of course I, too, feel that every medical paper should have a statistical review. I appreciate that statisticians may disagree over what the optimal analytic approach is, so my bar for this review is not necessarily "make the authors do everything the way I would have done it" so much as "make sure the authors have presented their data in a reasonable way and used a statistical approach that is appropriate for the conclusions they draw." Others may disagree in that regard, but it would certainly be nice if we could at least get everyone to that baseline and stop the flood of "here's a logistic regression with all p<0.05 variables in a multivariable model" papers that appear thoughtlessly, and apparently uncriticized by the journal editorial staff, even when the analyses are clearly absurd.
Which brings me to my suggestion/thought:
Perhaps the major statistical bodies (ASA and RSS, for a start) should offer a "certification" or something of that nature to journals that have statisticians on their editorial staff and guarantee that all papers receive a full statistical review. Sure, right now I know that J Thorac Cardiovasc Surg has all papers get a statistical review while J Am Coll Surg doesn't, but your average reader doesn't know that and could easily take the data in JACS papers just as seriously as JTCVS papers (not that JTCVS papers are perfect, mind you, but at least they have had a statistical review!), without considering abominations such as the one which inspired my Twitter rant yesterday. Perhaps offering at least an acknowledgement of journals that have statistical review would give them a slight credibility edge over journals that clearly do not.
It may feel like a hollow gesture, and it's certainly not a perfect solution, but we (the statistical-methods-matter-in-academic-medicine crowd) must start with something more than just blog posts and Tweetorials.
The Lancet does; most of the higher-level journals in the US probably do as well; but there is an entire class of second- and third-tier journals, indexed in PubMed with nontrivial impact factors, that do not.
That many journals don't engage stats reviewers is quite alarming, given the quality of the statistical acumen that crosses my desk. I think your proposal above (i.e., some sort of certification of stats review) is a good start. But it wouldn't surprise me if high-impact journals are accused of having self-serving motivations behind flagging their "superior" approach to peer review.
That's probably true, but there are two obvious retorts to this:
1. How can one argue that fulfilling a requirement to have a statistician review quantitative papers is a bad thing?
2. If you agree that it's a benefit to have statistical review, then... get it yourself.
I'm not saying the ASA or RSS would charge anything. Simply that journals which fulfill a requirement to have at least one statistician on the editorial board, as well as a large enough roster of statistical reviewers that all outcomes manuscripts get a statistical review, would get a little seal of approval indicating that the journal complies with that minimum reasonable standard. As I allude to above, I'm not expecting every paper to be read and scored to Frank Harrell's exacting standard - just having all papers reviewed by a qualified statistician who can flag "Hey, the authors used logistic regression with 15 variables when they have 21 events; they probably shouldn't do that" would be a start.
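That kind of first-pass check can be almost mechanical. As a rough sketch (treating the oft-cited minimum of roughly 10-20 events per variable as a rule of thumb, not a law):

```python
# Quick events-per-variable (EPV) sanity check for a logistic regression,
# using the hypothetical "15 variables, 21 events" example above.
n_events = 21
n_candidate_predictors = 15

epv = n_events / n_candidate_predictors
print(f"EPV = {epv:.1f}")  # EPV = 1.4 -- far below the ~10 rule of thumb, so severe overfitting is likely
```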
Perhaps only the upper tier of journals would be able to get statisticians interested. Hey, that's fine. At least we would have taken a first step towards identifying journals whose results are more likely to be credible.
I'm certainly aware that there are possible drawbacks here, too. I'm just thinking out loud about what we (I) can do other than Tweet snarky GIFs about poor-quality papers that get published and passed around as though they prove something when they do no such thing.
Thank you for your replies and suggestions. I'm always looking to expand my pool of statistical reviewers. At least in other fields, there are reviewer databases, and early-career researchers can throw their hats into the ring to be considered as referees. Is there something like that for statistical reviewers that you know of, or some other way to identify those in that community who might be interested in reviewing manuscripts? Thank you!
Sorry for the delayed reply; I've had a bit going on, but I wanted to come back to this.
I think this is very important. I put out a call on the American Statistical Association message boards earlier this year and received over 30 replies, admittedly from people of variable qualifications or interest (e.g., one gentleman replied that he would be interested in reviewing one paper per year, which wasn't especially helpful). But I think we must try harder to build a truly representative database of qualified statisticians who agree to be considered as referees for manuscripts. Some individual journals surely have their own pool, but my dream would be a full list curated by the ASA and/or RSS and/or other governing bodies - and furthermore, some way of assigning credit for completed reviews (I think that's part of the idea behind Publons).
The truth is, I think there are many more statisticians lurking about who could be more engaged in peer review but for whatever reason have demurred. One problem is certainly incentives - there may be a feeling that, absent compensation or a role on the editorial board (things that promote our careers), the time required to review papers is better spent elsewhere. I don't feel this way, but I'm sure some of my statistical colleagues do - they're already swamped with work and grant deadlines, so what is the benefit for them in reviewing articles in a clinical area outside their specialty that does little to build their CV or promotion file?
If you cannot offer a role on the editorial board and/or compensation for statistical reviewers, an option that I think some journals ought to consider is an "exchange" program of sorts - if a statistical reviewer completes a certain number of high-quality reviews (say, 12 in one year?), you will agree to publish a statistical editorial or column of their choosing (subject to editorial review and approval, of course). That gives a statistician some incentive to complete reviews for your journal because they'll get a publication out of it at the end, and lord knows we need more statistical-education pieces in the clinical literature.
The "exchange" idea is really interesting and I've not heard that discussed before. I hope we see some discussion about that.
This makes my mind wander to an opposite approach: crowd-sourcing. What if a journal were to describe the study, experimental design, and statistical analysis in generic terms and get comments from lots of statisticians on a web site? I personally think that the discussion would be interesting, instructional to students and biomedical researchers, and would possibly make paper reviews less subject to the "luck of the reviewer draw."
I discussed it on Twitter briefly - not just about statistical reviews, but about finding reviewers in general. Since economic realities may make it difficult for journals to pay all reviewers, one way to reward high-quality people who review faithfully is some sort of exchange. At first I suggested that reviewing X papers per year (assuming the editorial staff considers them good-quality reviews) should get you one "expedited review" for a paper of your choosing. Specifically for the statistician, giving us a platform to publish one statistical editorial in a journal for which we review more than X papers per year/month would possibly be motivating. We will all naturally gravitate towards things that promote our careers - first-author papers, editorial board positions, and money. If the latter two are not an option, giving us a once-a-year platform as a reward for reviewing 12 papers in one year (or maybe 24 - two per month) seems like an attractive option, especially to early-career folks who would benefit from the chance to get a first/sole-author paper.
In my journal, I have created a Statistics Advisory Board precisely for this, so that these experts are indeed part of the journal's board (and also to have a small pool of experts who are both interested and willing to review for the journal), and we also reach out to them yearly to ask about journal strategy/development. I don't know if it's enough of an incentive, but we are happy to offer that (in addition to APC discounts). I'm happy to explore other ways of giving credit as well.
Regarding your other point, I'd be more than happy to offer publication of such statistical discussions (after proper peer review). I've considered it, but thought nobody would be willing to write one.
It sounds like you are doing a terrific job, to be honest.
In general, I think three things will attract statisticians to review:
Financial Compensation (obvious reasons)
Editorial Board Membership (we can use this for promotion)
Opportunity to write our own articles as a reward for high-quality reviews
As for the last one: you thought nobody would be willing to write, but willingness will certainly be specific to the individual, and some may not want to bother, depending on their career stage and workload. I'll merely note that at my career stage, getting a couple of extra single-author publications would be valuable in building a file for promotion and tenure, and I suspect many others (in the USA) at the rank of Assistant Professor would feel the same. In a higher position, some folks may feel the additional time to write a piece is not the best use of their effort.