The Future of Scientific Publishing

The recent UC/Elsevier fallout has me thinking about what our idealized or future models of publishing should look like.

Investigators have shared interesting ideas on how we should reform peer review and create an open-access publishing environment. This advocacy has been going on for decades, and I recall the excitement when journals like PLoS launched, but variability in quality and interest has only led to the proliferation of more journals and staggering publication numbers each month.

What ideas do people have for creating a more egalitarian system for publishing/peer review that provides sufficient monitoring of quality?

Some principles I would like to see include:

  1. Blinded review (no authorship or institutions displayed for peer reviewers). People may infer, or guess with some certainty, but our default shouldn't be to provide that information to peer reviewers/editors.
  2. Public sharing of anonymous peer review comments along with articles.
  3. Peer review credits/payments: reviewing service earns the opportunity to have one's own work reviewed in a timely fashion. Not sure how feasible financial payments would be; it depends on revenue models.
  4. Open access and pre-prints
  5. Overall better adherence to sound statistical practice and principles of testing. This would generally mean that our journals need to lead the way in using statistics and causal-inference language appropriately.

I know people have shared other interesting modifications to scientific publishing, but I was hoping to see if we could use this forum to organize a feasible model. Appreciate all thoughts.

7 Likes

As usual, the first question is what problem we are trying to solve (I don't think there is just one):

  1. My gestalt says the foundational problem is the proliferation of minimal-value publications.
  2. Secondary problems include inadequate peer review prior to publication
  3. and lack of access to information (paywalls).

3 Likes

Problems:

  1. Journals prioritize publishing novel or headline-grabbing studies. This prioritization promotes new associations that frequently lack high-quality data or appropriate modeling. Think of all the nutritional studies (I thought you wanted to throw 99% of epi papers in the trash @mike_j?).
  2. Papers frequently get funneled to higher-tier journals based on authorship profiles rather than the quality of the work. I worry that editors may override or ignore legitimate criticisms, or that peer reviewers might level overly harsh critiques to keep unfavorable papers out of higher-impact journals.

We agree on the problems of peer review, and I wonder how much of this is attributable to the manner in which journals across the impact-factor spectrum are managed.

I also believe paywalls are undemocratic for research that is largely publicly funded; they recreate knowledge inequities based on who is fortunate enough to work at an institution that funds library subscription services.

4 Likes

You both are right. There are thousands of low-value papers already and more every day. More and more journals only worsen this problem.

Finding quality is hard. Many reviews are poor because peer reviewers are too busy or not sufficiently careful. In that setting, the quality of editors matters a lot, and some are more impartial than others.

I don't know that blind peer review is all that practical, though. Have you guys done it? I have a few times, and it has felt very clunky. This from a guy who normally bemoans the problems that double-blinded peer review should fix.

I do think the peer review correspondence and all drafts should be made available for accepted papers, though. Maybe even, after a while, for rejected papers. But all of that gets messy.

A mandatory preprint phase might help.

3 Likes

Maybe the scientific publishing world just can't scale up and maintain quality and accessibility. If so, then what is needed is a massive reduction in the number of published papers (the old "less research is needed").

And I think the easiest and quickest way to achieve this is by eliminating scientific publishing requirements on all clinical, teaching, and administrative positions. Boom! More efficient than The Leftovers' Sudden Departure and Thanos combined.

3 Likes

There's a large research workforce. I don't expect them not to write up or publish their work. There should be appropriately tiered avenues for describing what someone is doing. Mining poor-quality data should not be where our efforts are focused; we probably need to shift toward improving data quality overall in clinical medicine research.

Even more thumb twiddling otherwise.

2 Likes

Imagine trying to implement double-blind peer review in a world that increasingly embraces preprints and already makes extensive use of conference abstracts. The people positioned to review an article competently may already have discussed the work at a poster session or after a talk. Authors also cite their own prior work, and there are many other clues as to who did the work, unless you really begin stripping out information about the setting in which it was done. That could even interfere with effective review in some cases.

My recommendation is to embrace preprints and to create overlay journals (journals that curate and peer review papers already hosted on preprint servers). Also, review for overlay journals when asked, even though you won't immediately see why you should: you should.

My $0.02

Good topic!

2 Likes

I have no interest in ensuring blinding the way one would in a trial. I think the culture and general approach of blinded review would encourage evaluating substance while trying to ignore credentials and name recognition.

Teachers often blind themselves when grading even though penmanship might be a dead giveaway.

3 Likes

I believe we should embrace chaos and crowdsourcing by abandoning identified journals and allowing for premature dissemination of information with rapid fact checking, data sharing, talkbacks, post-"publication" peer review, etc. A chaotic system would function so much better than what we currently have and would not put any research sector at an economic disadvantage. Some revenue would be needed to run computer systems and to pay for some low-level curation (screening out offensive language, etc.). This might require something on the order of $5 US per article.

8 Likes

The value of journals: they show you what to read. A chaotic system of posting work on a pre-arranged server could/would easily overwhelm anyone's capacity to stay "current"; practicing docs aren't going to use this. I'd fear an even greater reliance on guidelines. There is also a need to maintain a historical record (although we seemingly ignore data from 10 years ago).

4 Likes

What @f2harrell has said has great appeal, in my opinion.

What @mike_j said is a good retort, though, and he's right that it's already difficult to find curation of high-quality clinical information. People use UpToDate for a reason. It isn't that it's so wonderful per se, but rather that it concisely summarizes the key literature useful for treating patients, the RCTs relevant to a particular problem. I find clinicians often gravitate to UpToDate even when there is a much better source of information, because it's quick, centralized, and they know what they'll get.

I learned the other day that even basic scientists might be seeking better-curated clinical information to help them plan their work; cf. https://twitter.com/Caroline_Bartma/status/1101581775208366082

2 Likes

@f2harrell I didn't take you for an anarchist :wink:

4 Likes

Impact factors inherently function as a type of aggregator. The largest academic audience looks for what has filtered through NEJM, Nature, Cell, Lancet, etc., and readership narrows and becomes more specialized down the tree of journals.

I think whatever alternative model we can conceive of requires an aggregator function, because people cannot sift through preprint servers. As much as preprints are advocated, no one in the medical fields seems to scan through or promote them. If there really were a strong preprint environment, maybe something like a Reddit-style feature could help filter and promote important, high-quality work; a rough sketch of what I mean follows.
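
To make that concrete, here is a minimal sketch of such a ranking, loosely modeled on the general shape of Reddit's public "hot" algorithm (log-scaled net votes plus a recency term, so newer well-received preprints can surface). All titles, vote counts, and the time constant are hypothetical:

```python
# Hypothetical Reddit-style "hot" ranking for preprints.
# The 45000-second divisor (~12.5 hours per tenfold vote gap) is an
# assumption borrowed from descriptions of Reddit's public algorithm.
import math
from datetime import datetime, timezone

EPOCH = datetime(2019, 1, 1, tzinfo=timezone.utc)  # arbitrary reference time

def hot_score(upvotes: int, downvotes: int, posted: datetime) -> float:
    """Log-scaled net votes plus a recency bonus."""
    net = upvotes - downvotes
    order = math.log10(max(abs(net), 1))
    sign = 1 if net > 0 else -1 if net < 0 else 0
    age_seconds = (posted - EPOCH).total_seconds()
    return sign * order + age_seconds / 45000

# Invented example preprints: (title, upvotes, downvotes, posting time).
preprints = [
    ("well-reviewed new RCT", 120, 4, datetime(2019, 3, 2, tzinfo=timezone.utc)),
    ("older observational study", 300, 40, datetime(2019, 2, 1, tzinfo=timezone.utc)),
]
for title, up, down, ts in sorted(preprints, key=lambda p: hot_score(p[1], p[2], p[3]), reverse=True):
    print(f"{hot_score(up, down, ts):10.2f}  {title}")
```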

I feel fairly resigned to the fact that how journals filter information for the scientific community won't change much over the next 50-100 years.

3 Likes

I very much admire what @f2harrell is arguing. I think there's a lot of truth to it.

It's true also that aggregation is desired by clinicians in their work, a reality dictated by the sheer volume of decisions they have to make and the extremely wide spectrum of possible questions they will be asked. Learning how to quickly make one's way to the best evidence is a survival skill in medical practice.

What we're really saying is that the literature serves more than one audience. The audience of researchers, which I think includes everyone on this board, is much more open to the idea of anarchy. It makes a lot of sense: we can take the time to vet work relevant to our area, and we can vet it skillfully. But the second audience consists of people who, as a practical matter, need studies to have already been vetted, and anarchy cannot serve them well.

So, @f2harrell & @boback & @mike_j are all correct, but for different audiences. Thus, I conclude there cannot be a complete replacement of the current system, nor can it stay the same as it is.

3 Likes

Yes, and fewer people doing PhDs would be good. I read a blog post that made the point, but I can't find it.

I don't like blinded review. It's sometimes possible to discern who the reviewer is when they insist you reference their work, even though it's not relevant. I'd like to see silly things like this cease.

Being 'well-liked' has become much more relevant to the likelihood of publishing than it ever was or should be.

4 Likes

True, but why has it worked outside medicine, e.g., in psychology? I read their preprints and I'm not even in the field.

1 Like

I love what researchers.one is doing. I would publish there if I were sole author.

If there is a way to use reader response to rank the veracity and usefulness of research papers, I think we could do without aggregation. A sketch of one such scheme follows.
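
As an illustration only (paper titles and vote counts are invented), ranking by the lower bound of the Wilson score interval rewards papers whose positive-response rate is both high and supported by enough votes, rather than ranking by raw counts:

```python
# Hypothetical reader-response ranking using the Wilson score lower bound:
# a paper with many votes and a high positive fraction outranks a paper
# with a perfect score from only a handful of readers.
import math

def wilson_lower_bound(pos: int, n: int, z: float = 1.96) -> float:
    """Lower bound of the 95% Wilson score interval for the true
    proportion of positive ratings (pos positives out of n votes)."""
    if n == 0:
        return 0.0
    phat = pos / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt((phat * (1 - phat) + z * z / (4 * n)) / n)
    return (centre - margin) / (1 + z * z / n)

# Invented examples: (useful votes, total votes).
papers = {
    "large pragmatic RCT": (57, 60),
    "small single-centre study": (9, 9),  # perfect score, but few votes
}
for title, (pos, n) in sorted(papers.items(), key=lambda kv: wilson_lower_bound(*kv[1]), reverse=True):
    print(f"{wilson_lower_bound(pos, n):.3f}  {title}  ({pos}/{n})")
```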

2 Likes

That's a great point. I don't know that clinical readers of, say, a large clinical trial who haven't received additional training in trial design and biostatistics can rate the studies accurately. But perhaps those who can judge would vote, and those who can't would refrain from voting (a toy sketch of this weighting idea appears below). The other interesting thing, as I ponder a rating system, is that some findings might be valid but inconvenient, suggesting for example that a complex workup needs to be undertaken much more often than it is to diagnose patients properly. How would that be rated by end users, I wonder?

If a rating system can be implemented well, it has huge advantages (and probably some disadvantages, realistically) over gatekeeping by a few experts.

I'm going to speculate that a failure to distinguish between the clinician and the trained clinician-scientist or clinician-epidemiologist contributes to the sense that everyone could judge and it would be fine. Maybe, but it is not at all obvious to me that that would work.
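
Here is the toy weighting sketch mentioned above. The rater categories and weights are entirely invented, just to show how untrained votes could count less rather than not at all:

```python
# Hypothetical qualification-weighted rating: each vote is weighted
# by the rater's methods training, so trained raters dominate the mean.
# Categories and weights below are assumptions for illustration.
WEIGHTS = {
    "clinician": 0.25,
    "clinician-scientist": 1.0,
    "biostatistician": 1.0,
    "lay reader": 0.0,  # may comment, but the vote carries no weight
}

def weighted_rating(votes):
    """Weighted mean of (rater_category, score on a 0-10 scale) pairs."""
    total = sum(WEIGHTS.get(cat, 0.0) * score for cat, score in votes)
    weight = sum(WEIGHTS.get(cat, 0.0) for cat, _ in votes)
    return total / weight if weight > 0 else float("nan")

votes = [("clinician", 9.0), ("clinician-scientist", 6.0), ("biostatistician", 5.5)]
print(f"weighted rating: {weighted_rating(votes):.2f}")  # pulled toward the trained raters
```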

1 Like

One thing to be cautious of in any crowd-sourced voting on studies is the tendency to promote articles that confirm one's existing views.

Some of you may recall that AAFP publishes a "top 20 articles" list based on voting; they also have one based on the most common search queries from their website. The lists are informative, but they showcase some limits of this approach when taken without more structure.

My personal opinion is that we should just train more biostatisticians and create a biostatistics medical specialty to read, interpret, and sort out primary data for the rest of the medical field. I have no idea how to read a stress test; I trust the highly trained cardiologist. Ditto with radiology, pathology, etc. Sure, many of these things I could also do, but I trust teamwork and expertise when applying them to patient care.

3 Likes