I already have an editorial position that takes up the time I can allot for reviewing papers, but I will mention this to a few of my fellow statisticians to see if any of them are interested.
Thank you very much!
Despite the consensus, I thought this reference might be of interest:
I haven’t read this yet, but I like the contentious nature of it: Is Peer Review a Good Idea? … I may share my thoughts after going through it.
Essential to the quality of the medical research literature. But it’s a thankless, under-resourced task. ALL medical literature? Several decades ago, before I had any substantial experience of refereeing papers, someone in my university asked whether I’d be willing to join an editorial board. On enquiry I found that the journal in question was on anatomy. My first reaction was: surely papers on anatomy don’t require statistical review, do they? On looking at a few issues, I was less convinced. Think of the research question ‘How many segments does an orange have?’ The sort of paper that got published was the kind that simply said ‘Nine’, without any consideration that the number could well vary between individual fruits.
Not sure this is the right place for this, but here it goes anyway…
Is there a journal where we can publish stuff like ‘Letter to the Editor’ or ‘Correspondence’ addressing statistical issues in published articles? Ideally this material should be submitted to the journal where the original article was published. But what if the journal refuses to publish this?
In effect, accepting such articles (‘Letter to the Editor’ or ‘Correspondence’) is an admission that the journal did not review the published paper adequately, which does not look good for the editorial board. I believe this is the main reason why this kind of material does not appear much in the literature, especially in the highest-ranked medical journals. And there are many statistical issues (not to say gross errors) in lots of articles from these journals.
Any thoughts?
Twice I have posted statistical concerns about papers directly on PubPeer; in one case, the authors deigned to respond, and there was some valuable back-and-forth discussion there. Also, if you have concerns about a paper, look it up on PubPeer to see if anyone else has already raised similar concerns.
PubPeer does look like the best approach at present. Sometimes we have to resort to shaming on social media when authors just don’t get it, like the group of surgeons who ‘discovered’ post hoc power calculations and wanted the world to adopt them in interpreting all studies.
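For anyone who hasn’t seen why that proposal is empty: “observed” (post hoc) power is just the p-value re-expressed on another scale, so it cannot add any information to a completed study. A minimal sketch in Python (my own illustration, assuming a simple two-sided z-test; `observed_power` is just a name I made up, not anything from that paper):

```python
# Post hoc "observed" power for a two-sided z-test is a deterministic
# function of the p-value alone, so it can never contradict it.
from scipy.stats import norm

def observed_power(p_value, alpha=0.05):
    """Power recomputed by plugging the observed effect in as if it were the true effect."""
    z_obs = norm.ppf(1 - p_value / 2)   # |z| statistic implied by the p-value
    z_crit = norm.ppf(1 - alpha / 2)    # two-sided critical value
    return norm.sf(z_crit - z_obs) + norm.sf(z_crit + z_obs)

for p in (0.001, 0.05, 0.20, 0.50):
    print(f"p = {p:.3f}  ->  'observed power' = {observed_power(p):.2f}")
```

In particular, whenever p equals the chosen alpha, the “observed power” comes out at roughly 50% no matter what the study was, which is a hint that the quantity tells you nothing you didn’t already know from p itself.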
Will try PubPeer then.
Thanks for the reply, @ChristopherTong and @f2harrell!
How are the authors of the paper informed about the PubPeer posting?
Right after submitting your post, you are prompted to provide author emails. So it pays to have those handy. Also, be aware you can edit your post up until it receives its first reply. (I made a typo in one post that weighed on my conscience for years until I realized there was an edit button.)
As a statistician, I have been in the situation where I was a co-author and we submitted to a Q2 journal. The (clinical) reviewer asked us to commit statistical atrocities, even though I explained (with references) why they were a bad idea. The reviewer insisted (e.g. literally “please show me p-values for the normality tests”) without any rationale or counterarguments; see the sketch at the end of this post for why that particular request is uninformative. I had to comply so as not to waste everybody’s time.
On the other hand, I have also been in the situation where authors reached out to me because their paper received major comments from the statistical reviewer of a Q1 journal. They had followed poor statistical practice, not having consulted a statistician on the design or analysis beforehand.
I believe these anecdotes illustrate the value a statistical reviewer adds to the quality of a journal’s papers.
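On the “p-values for normality tests” request in particular: one reason it is uninformative is that a normality test’s verdict is driven largely by sample size, not by whether the deviation from normality would actually affect the planned analysis. A small simulation sketch (my own, in Python; the t(10) population, the sample sizes, and the function name are arbitrary illustrative choices):

```python
# For a fixed, mildly heavy-tailed population, whether a normality test
# "fails" depends mostly on how many observations you collected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def shapiro_rejection_rate(n, reps=500, alpha=0.05):
    """Fraction of simulated t(10) samples of size n that Shapiro-Wilk flags as non-normal."""
    flags = [stats.shapiro(stats.t.rvs(df=10, size=n, random_state=rng)).pvalue < alpha
             for _ in range(reps)]
    return np.mean(flags)

for n in (20, 100, 500, 2000):
    print(f"n = {n:5d}: rejection rate ~ {shapiro_rejection_rate(n):.2f}")
```

The same population “passes” at small n and “fails” at large n, even though large samples are exactly where downstream procedures such as the t-test are most robust to this kind of deviation.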
What does this say about “peer review” when incorrect statistical analysis is enforced by subject-matter experts? The more I read, the less I am able to simply “trust the science.” I’m more partial to the original motto of the Royal Society: “Nullius in verba”, roughly meaning “Take no one’s word for it.”
This attitude of distrust is especially important now, when large language models are able to confabulate plausible sentences that have no basis in reality.
I think the entire notion of “peer review”, even when done by statisticians in their domain of expertise, needs to be done away with. I agree with Harry Crane (statistician at Rutgers) and Ryan Martin (statistician at North Carolina State) that “peer review” does more harm than good.
Crane, H. and Martin, R. (2018). In peer review we (don’t) trust: How peer review’s filtering poses a systemic risk to science. Researchers.One.
Is it feasible for you to do as asked but to also include commentary on why it’s not useful or informative?
A related issue is the “require scientific review” dilemma of IRBs. See this annotated reviewer report on our paper, whose comments are steeped in junk science. Our paper addresses excessive or unnecessary IRB oversight justified by junk science (i.e., weak or misleading research that overstates risks, leading to burdensome ethics reviews for low-risk studies). This is a real problem in academia, where overzealous regulation stifles useful research without meaningful ethical benefit.
Examples of Junk Science Used to Justify Overregulation
1. Exaggerated “Psychological Harm” in Minimal-Risk Studies
- Example: A survey study asking about dietary habits is subjected to full IRB review because a single, poorly designed paper claimed that “any personal questions could cause trauma.”
- Why Junk? No empirical evidence shows that benign surveys cause harm, yet IRBs cite outlier studies to justify restrictive reviews.
2. Misapplying High-Risk Paradigms to Low-Risk Research
- Example: A study on workplace productivity is forced into a clinical trial-style consent process because of a flawed meta-analysis claiming “all human subjects research carries non-negligible risk.”
- Why Junk? Conflating observational studies with clinical interventions leads to unnecessary bureaucracy.
3. Overgeneralizing Rare Adverse Events
- Example: An IRB requires extensive safeguards for a study on music preferences because one case report (n=1) claimed a participant “felt distressed” after hearing a song they disliked.
- Why Junk? Anecdotes ≠ evidence, yet they’re used to justify excessive restrictions.
4. “Precautionary Principle” Without Evidence
- Example: IRBs reject or delay archival research (using existing public data) because a speculative paper argued that “data re-identification risks are always possible,” despite near-zero real-world cases.
- Why Junk? Fear-based policy without probabilistic risk assessment.
5. Misinterpreted Neuroethics Scaremongering
- Example: Basic fMRI studies are subjected to invasive consent forms because of junk science claiming “brain scans can read your thoughts,” despite no such capability existing.
- Why Junk? Misrepresents technology to justify unnecessary oversight.
6. Overstating Risks in Educational Research
- Example: A study on math teaching methods is classified as “greater than minimal risk” because a dubious paper claimed “testing new pedagogies could harm students’ self-esteem.”
- Why Junk? No causal evidence, yet it triggers burdensome IRB hurdles.
7. “Ethical Inflation” from Flawed Systematic Reviews
- Example: A weak meta-analysis claims that “all research with vulnerable populations (e.g., college students) requires special protections,” leading to IRBs overregulating even anonymous surveys.
- Why Junk? Poor methodology (e.g., cherry-picked studies) drives overregulation.
Why This Matters
- Slows Down Science: Researchers waste time on unnecessary ethics reviews.
- Creates Bureaucratic Bloat: IRBs become gatekeepers for negligible-risk studies.
- Discourages Innovation: Fear of overregulation leads to avoidance of useful research.
Solutions
- Risk-Proportionate Review: Streamline approvals for truly minimal-risk studies.
- Evidence-Based IRB Policies: Require robust science (not speculation) to justify restrictions.
- Exemptions for Public Data & Non-Interventional Studies: Stop treating surveys like clinical trials.
