The recent UC/Elsevier fallout has prompted me to think about what an idealized or future model of publishing should look like.
Investigators have shared interesting ideas on how we should reform peer review and create an open-access publishing environment. This advocacy has been going on for decades, and I recall the excitement when journals like PLoS were started, but variable quality and interest have only led to a proliferation of journals and staggering publication counts each month.
What ideas do people have for creating a more egalitarian system for publishing/peer review that provides sufficient monitoring of quality?
Some principles I would like to see include:
Blinded review (no authorship or institutions displayed to peer reviewers). People may infer, or guess with some certainty, but our default shouldn't be to provide that information to peer reviewers/editors.
Public sharing of anonymous peer review comments along with articles.
Peer review credits/payments - reviewing service provides the opportunity to have one's own work reviewed in a timely fashion. Not sure how feasible financial payments would be; that depends on revenue models.
Open access and pre-prints
Better adherence overall to sound statistical reporting and principles of testing. This would generally mean that our journals need to lead the way in using statistics and causal-inference language appropriately.
I know people have shared other interesting modifications to scientific publishing, but I was hoping to see if we could use this forum to organize a feasible model. Appreciate all thoughts.
Journals prioritize publishing novel or headline-grabbing studies. This prioritization leads to new associations that frequently lack high-quality data or appropriate modeling. Think of all the nutritional studies (I thought you wanted to throw 99% of epi papers in the trash @mike_j?).
Papers frequently get funneled to higher-tier journals based on the authorship profiles rather than the quality of the work. I worry that editors may override or ignore legitimate criticisms, or that peer reviewers might make unduly harsh criticisms to keep unfavorable papers out of higher-impact journals.
We agree on the problems of peer review, and I wonder how much of this is attributable to the way journals across the impact-factor spectrum are managed.
I also believe paywalls are undemocratic for research that is largely publicly funded; they recreate knowledge inequities based on who is fortunate enough to work at an institution that funds library subscriptions.
You both are right. There are thousands of low-value papers already and more every day. Adding more and more journals only worsens this problem.
Finding quality is hard. Many reviews are poor because peer reviewers are too busy or not sufficiently careful. In that setting, the quality of editors matters a lot; some are more impartial than others.
I don't know that blind peer review is all that practical, though. Have you guys done it? I have a few times and it has felt very clunky at times. This from a guy who normally bemoans the problems that double-blind peer review should fix.
I do think the peer review contents and all drafts should be made available for accepted papers, though. Maybe even, after a while, for rejected papers. But all of that gets messy.
Maybe the scientific publishing world just can't scale up and maintain quality and accessibility. If so, then what is needed is a massive reduction in the number of published papers (the old "less research is needed").
And I think the easiest and quickest way to achieve this is by eliminating scientific publishing requirements on all clinical, teaching, and administrative positions. Boom! More efficient than The Leftovers' Sudden Departure and Thanos combined.
There's a large research workforce. I don't expect them not to write up their work or publish it. There should be appropriately tiered avenues for describing what someone is doing. Mining poor-quality data should not be where our efforts are focused; we probably need to shift to improving data quality overall in clinical medicine research.
Imagine trying to implement double-blind peer review in a world that will increasingly embrace preprints and already makes extensive use of abstracts at conferences. People in a position to review the article competently might already have discussed the work at a poster or after a talk. People also cite their prior work, and there are many other clues as to who did the work, unless you really begin stripping out information about the setting in which the work was done. That could even interfere with effective review in some cases.
My recommendation is to embrace preprints and to create overlay journals. Also, review for overlay journals when asked, even though you won't immediately see why you should. You should.
I have no interest in ensuring blinding the way one would in a trial. I think the culture and general approach to blinded review would advocate for evaluating substance while trying to ignore credentials and name recognition.
Teachers often blind themselves when grading even though penmanship might be a dead giveaway.
I believe we should embrace chaos and crowd sourcing by abandoning identified journals, allowing for premature dissemination of information with rapid fact checking, data sharing, talkbacks, post-"publication" peer review, etc. A chaotic system would function so much better than what we currently have and would not put any research sector at an economic disadvantage. Some sort of revenue would be needed to run computer systems and to pay for some sort of low-level curation (screening out offensive language, etc.). This might require something on the order of $5 US per article.
The value of journals: they show you what to read. A chaotic system of posting work on a pre-arranged server could/would easily overwhelm anyone's capacity to stay "current"; practicing docs aren't going to use this. I'd fear an even greater reliance on guidelines. There is also a need to maintain a historical record (although we seemingly ignore data from 10 years ago).
What @f2harrell has said has great appeal, in my opinion.
What @mike_j said is a good retort, though, and he's right that it's already difficult to find curation of high-quality clinical information. People use UpToDate for a reason. It isn't that it's so wonderful per se, but rather that it concisely summarizes the key literature useful for treating patients, the RCTs relevant to a particular problem. I find clinicians often gravitate to UpToDate even if there is a much better source of information, because it's quick, centralized, and they know what they'll get.
Impact Factors inherently function as a type of aggregator. The largest academic audience will look for what has filtered through NEJM, Nature, Cell, Lancet, etc., and readership narrows and becomes more specific further down the tree of journals.
I think whatever alternative model we can conceive of requires an aggregator function, because people cannot sift through preprint servers. As much as preprints are advocated, no one in the medical fields seems to scan through or promote them. If there really were a strong preprint-type environment, maybe something like a Reddit feature could help filter and promote important, high-quality work; a rough sketch of what that might look like is below.
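To make that Reddit-style idea a bit more concrete, here is a minimal sketch of the kind of time-decayed "hot" ranking such a feature could use. Everything in it (the Preprint class, the vote fields, the 45,000-second constant) is an illustrative assumption on my part, not any existing service's algorithm.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from math import log10

# Arbitrary reference epoch for the time bonus (an assumption for illustration).
EPOCH = datetime(2020, 1, 1, tzinfo=timezone.utc)

@dataclass
class Preprint:
    title: str
    upvotes: int
    downvotes: int
    posted: datetime  # timezone-aware posting time

def hot_score(p: Preprint) -> float:
    """Log-scaled net votes plus a time bonus, so newer work can surface
    without letting raw vote counts dominate indefinitely."""
    net = p.upvotes - p.downvotes
    order = log10(max(abs(net), 1))
    sign = 1 if net > 0 else -1 if net < 0 else 0
    seconds = (p.posted - EPOCH).total_seconds()
    return order + sign * seconds / 45000  # roughly 12.5 hours per order of magnitude

def front_page(preprints, n=10):
    """Return the n highest-scoring preprints."""
    return sorted(preprints, key=hot_score, reverse=True)[:n]
```

The harder question, of course, is who gets to vote and how much weight their votes carry, which the sketch deliberately leaves open.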
I feel fairly resigned to the fact that how journals filter information for the scientific community won't change much over the next 50-100 years.
I very much admire what @f2harrell is arguing. I think there's a lot of truth to it.
It's true also that aggregation is desired by clinicians in their work, a reality dictated by the sheer volume of decisions they have to make and the extremely wide spectrum of possible questions they will be asked. Learning how to quickly make one's way to the best evidence is a survival skill in the setting of medical practice.
What we're really saying is that the literature serves more than one audience. The audience of researchers (which includes everyone on this board, I think) is much more open to the idea of anarchy. It makes a lot of sense: we can take the time to vet work relevant to our area, and we can vet it skillfully. But the second audience consists of people who, as a practical matter, need the study to have already been vetted, and anarchy cannot serve them well.
So, @f2harrell & @boback & @mike_j are all correct, but for different audiences. Thus, I conclude there cannot be a complete replacement of the current system, nor can it stay the same as it is.
Yes, and fewer people doing PhDs would be good. I read a blog post that made the point, but I can't find it.
I don't like blinded review. It's sometimes possible to discern who the reviewer is, and they insist you reference their work even though it's not relevant. I'd like to see silly things like this cease.
- Being "well-liked" has become much more relevant to the likelihood of publishing than it ever was or should be.
That's a great point. I don't know that clinical readers of, e.g., a large clinical trial who haven't received additional training in trial design and biostats can rate the studies accurately. But perhaps those who can judge would vote and those who can't would refrain from voting. The other thing that is interesting as I ponder a rating system is that some findings might be valid but inconvenient, suggesting for example that a complex workup needs to be undertaken much more often than it is to diagnose patients properly. How would that be rated by end users, I wonder?
If a rating system can be implemented well, it has huge advantages (and probably some disadvantages, realistically) over gatekeeping by a few experts.
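As one concrete illustration (purely a sketch under my own assumptions, not a proposal anyone here has endorsed), a rating system could rank articles by the lower bound of a Wilson confidence interval on the proportion of "useful" votes, so a couple of enthusiastic votes cannot outrank a large, mostly positive sample:

```python
from math import sqrt

def wilson_lower_bound(positive: int, total: int, z: float = 1.96) -> float:
    """Lower bound of the ~95% Wilson interval for the true 'useful' proportion."""
    if total == 0:
        return 0.0
    phat = positive / total
    denom = 1 + z * z / total
    centre = phat + z * z / (2 * total)
    margin = z * sqrt(phat * (1 - phat) / total + z * z / (4 * total * total))
    return (centre - margin) / denom

# An article with 2/2 positive votes ranks below one with 180/200.
print(wilson_lower_bound(2, 2))      # ~0.34
print(wilson_lower_bound(180, 200))  # ~0.85
```

Whether and how to weight votes by methodological training is a separate design choice that this kind of formula does not settle.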
I'm going to speculate that a failure to distinguish between the clinician and the trained clinician-scientist or clinician-epidemiologist, etc., contributes to why it seems that one could have everyone judge and it would be fine. Maybe, but it's not at all obvious to me that that would work.
One thing to be cautious of in any crowd-sourced voting on studies is the tendency to promote articles that confirm one's existing views.
Some of you may recall that AAFP publishes a "top 20 articles" list based on voting; they also have one based on the most common search queries on their website. The lists are informative, but they showcase some limits of this approach in the absence of more structure.
My personal opinion is that we should just train more biostatisticians and create a biostats medical specialty to read, interpret, and sort out primary data for the rest of the medical field. I have no idea how to read a stress test; I trust the highly trained cardiologist. Ditto with imaging and radiology, pathology, etc. Sure, many of these things I could also do, but I trust teamwork and expertise when it comes to patient care.