The Future of Scientific Publishing

I do recall the AAFP top 20 articles list. What’s notable is that it heavily emphasizes articles that suggest physicians no longer have to do X. I can see why that would be popular in a busy practice. But maybe there are things that should be done more often, and I don’t anticipate those things being popular in a survey, voting, or contest mode.

I could see the specialty you’re describing working, as long as the people trained in it are rounding, working in clinics, etc., seeing medicine up close and personal to inform their understanding of medical decision-making.

2 Likes

Perhaps this is an opportunity for complex analysis or artificial intelligence. I have no idea how to set this up, but perhaps someone has done this in another field. What I’m alluding to is textual analysis of the criticisms of an article that are posted alongside the article, with the criticisms weighted by a reputation score of the critics. At first the system would be circular, and reputation scores would need constant updating, changing the initial “rating” of the paper. But there is perhaps the possibility of separating methods quality from clinical usefulness.
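Purely to illustrate that circularity, here is a minimal sketch with made-up critics, papers, and scores (and leaving aside the text analysis that would be needed to turn a written criticism into a numeric score): paper ratings are the reputation-weighted mean of critic scores, and critic reputations are in turn re-derived from how closely each critic’s past scores track the emerging consensus, iterating until both stabilize.

```python
# Hypothetical sketch of a reputation-weighted rating system for
# post-publication criticism. Paper ratings and critic reputations
# depend on each other, so both are updated iteratively ("circular").

# Each critique: (critic, paper, score in [0, 1]) - all values made up.
critiques = [
    ("alice", "paper_A", 0.9), ("alice", "paper_B", 0.2),
    ("bob",   "paper_A", 0.8), ("bob",   "paper_B", 0.3),
    ("carol", "paper_A", 0.1), ("carol", "paper_B", 0.9),
]

critics = {c for c, _, _ in critiques}
papers = {p for _, p, _ in critiques}

reputation = {c: 1.0 for c in critics}   # everyone starts equal
rating = {p: 0.5 for p in papers}        # neutral initial ratings

for _ in range(50):                      # iterate until values settle
    # 1) Paper rating = reputation-weighted mean of its critiques.
    for p in papers:
        num = sum(reputation[c] * s for c, q, s in critiques if q == p)
        den = sum(reputation[c] for c, q, _ in critiques if q == p)
        rating[p] = num / den if den else 0.5

    # 2) Critic reputation = closeness of their scores to the consensus.
    for c in critics:
        errs = [abs(s - rating[p]) for cc, p, s in critiques if cc == c]
        reputation[c] = 1.0 - sum(errs) / len(errs) if errs else 1.0

print(rating)      # consensus ratings per paper
print(reputation)  # updated critic reputations
```

Separating methods quality from clinical usefulness would simply mean running two such loops over two different score dimensions.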

The extreme limit of all this, as someone proposed on Twitter a few weeks ago, is to have research teams publish data without commentary, get major authorship credit for generating the data, and let the world have at the analysis of the data.

3 Likes

The clinical importance of UpToDate is also now cemented, as the ABIM recertification knowledge check-in uses it and favors its preferred answers extensively.

1 Like

It’s very possible, even likely, that people who are trained are uneven in quality, and that some who are “untrained” could exceed some of the “trained” in quality. I would suggest that the value of a rating be weighted based on all prior contributions (primary and comments).

Biostatisticians are a very valuable resource but also vary in quality and knowledge. Not all are @f2harrell or @ADAlthousePhD.

We definitely need more MDs with formal PhD level quantitative skills. There should be a serious effort to encourage this in MSTP programs.

3 Likes

This has been a very interesting read with a lot of great suggestions. One thing I have noticed that hasn’t been discussed is the strong incentive for academics to maximize the quantity of their publications. A CV padded with a lot of garbage science looks better on its face, because you can’t actually judge the quality of the articles from the citations alone (except for the journal each is published in, which is no guarantee of quality either). Until the incentives to publish quantity over quality are removed, I am pessimistic that many of these great ideas will gain traction.

3 Likes

Hoo boy. Much to comment upon here.

Please, let’s straighten one thing out now: Frank has (easily) forgotten more than I currently know about statistics. I’m early in my career, just trying to do the best I can with my current skills while occasionally picking up new things when I have the time. I digress, though. I agree with your overarching point - of course I’m an advocate for “more involvement of statisticians with contextual knowledge/experience in medicine,” but that isn’t a silver bullet by itself, and just as “physicians” are not a uniform lot, neither are “statisticians.”

So while I love this idea…

…I have my doubts that this can ever be implemented in real life. I understand that sometimes tossing around pie-in-the-sky theoretical solutions leads to an effective discussion of what can be implemented, and while I appreciate the parallel between a trained researcher interpreting a paper and a cardiologist reading a stress test (and think there’s some truth to it), I can’t imagine “trained statisticians will interpret all the papers and sort them out for everyone” is going to fly, for one primary reason: who’s going to pay for this job?

Moving on…

I share the skepticism of reader responses / comments / votes as an improvement. I’m surprised that Frank likes this idea, given how often we (realistically, a relatively small vocal minority on Twitter) roll our eyes at the poor-quality statistics/methods embraced and reported in journals. I fear that this would devolve further into a popularity contest, with people leaving good comments/votes on things they like and slinging mud at things they don’t like.

That, also, is a critical point. Again, the pool of people who comment here / discuss methods on Twitter is a small fraction (like, small enough that I can rattle off most of us by name…) of the pool of consumers of medical literature, and Mike J reminds us…

So, having worked my way backwards through this thread, what are the suggestions that I’ve seen so far which (IMO) could be implemented sooner rather than later, and in the current environment?

I’ve become a pretty big fan of this idea. I see some of the downsides of revealing the reviewers’ identities (I know that several of us who have occasionally dared to critique articles on Twitter have received threatening phone calls / messages telling us to back off or face consequences), but posting the peer-review comments (anonymized) gives the reader some additional context regarding why the final publication looks the way that it does.

I also very much like the idea of finding ways to encourage involvement in reviewing - I have been part of a couple of Twitter threads recently where we bemoan slow review times, but the most common cause of long wait times is difficulty finding 2 qualified reviewers (sometimes editors have to send out a dozen or more invitations to get 2 people to agree). If “just pay the reviewers for their efforts” is not a feasible solution, approaches such as Boback’s suggestion here of earned credits seem worthwhile to explore: in exchange for XX timely reviews (meeting a quality threshold, assessed by the editor), you earn 1 “expedited review” submission - something like that.
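Just to make the mechanics concrete, here is a toy sketch of how such an earned-credit ledger might be tracked; the reviews-per-credit threshold is a placeholder for the unspecified “XX,” and the quality check stands in for whatever the editor would actually assess.

```python
# Hypothetical earned-credit ledger for peer reviewers: completing
# enough timely, quality-approved reviews earns one "expedited review"
# credit that can be spent on the reviewer's own next submission.

REVIEWS_PER_CREDIT = 5   # placeholder for the "XX" in the proposal

class ReviewerLedger:
    def __init__(self):
        self.completed = {}   # reviewer -> count of qualifying reviews
        self.credits = {}     # reviewer -> expedited-review credits

    def log_review(self, reviewer, on_time, passed_quality_check):
        """Record a review; only timely, quality-approved reviews count."""
        if not (on_time and passed_quality_check):
            return
        self.completed[reviewer] = self.completed.get(reviewer, 0) + 1
        if self.completed[reviewer] % REVIEWS_PER_CREDIT == 0:
            self.credits[reviewer] = self.credits.get(reviewer, 0) + 1

    def spend_credit(self, reviewer):
        """Redeem one credit for expedited handling of the reviewer's own paper."""
        if self.credits.get(reviewer, 0) > 0:
            self.credits[reviewer] -= 1
            return True
        return False

ledger = ReviewerLedger()
for _ in range(5):
    ledger.log_review("reviewer_1", on_time=True, passed_quality_check=True)
print(ledger.spend_credit("reviewer_1"))  # True: one expedited submission earned
```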

This falls into the dreaded bin of things that are “easy to say, make complete sense, but are challenging to actually implement” - for reasons discussed in some of the ensuing posts. If you have the time, I’d encourage you to read Doug Altman’s 1998 “Statistical Reviewing for Medical Journals” - especially his list of 5 troublesome questions at the end.

(i) How much of a purist should the referee be (especially if it is unlikely that the ‘correct’ analysis will alter the findings)?
(ii) How much detail should the referee require of highly complex analyses that would not be understood by most readers?
(iii) Should the referee take account of the expectation that the authors have no access to expert statistical help? If so, how?
(iv) How should the referee deal with papers using methods that are (widely) used by other statisticians but which he/she does not approve of?
(v) When is a definite error a fatal flaw?

I struggle with some or all of those 5 questions in many of the papers that I review.

I’d also be remiss if I didn’t cast support for Mike T’s comment:

I’ve observed this firsthand - a nontrivial slice of the faculty producing these manuscripts seems to scarcely care at all about the quality or accuracy of what they publish. As long as their CV is getting longer, things are working fine in their eyes. I had a reasonably senior academic physician admit directly that the first thing they tend to look at when evaluating a faculty candidate is the raw number of publications the person has (also glancing to see if any of them are in big-name journals, but clearly, the first thing that impressed them was a big number). So (IMO) changing the scientific publication model needs to happen in concert with research universities changing how they evaluate faculty for hiring, promotion, and tenure. I’m not sure whether one of these things can fully drive the other…

5 Likes

This is 100% a pie-in-the-sky idea. Even I am not taking it seriously.

But just for fun: the way it could work is to have “paper/data interpretation” billable by licensed professionals, the way current fee-for-service works for most healthcare. It would add “skin in the game,” i.e., one is paid and held accountable for making valid/consistent reports. For example, in radiology there is considerable uncertainty, but radiologists do a pretty good job finding agreement or clarifying/hedging on areas of uncertainty. Like any fee-for-service arrangement, billing could be done by time or by documentation of work. You could then have registries that aggregate the reports and provide meta-commentary.

scite is post-publication, but along those lines? It attempts to quantify whether the literature supports or contradicts a given paper. It’s in beta testing.

“…potential for overfitting and rushed analyses.”

I suspect the future of scientific publishing will be essentially unchanged from the present of scientific publishing. It might go through various attempts at some sort of utopian solution, but ultimately, for all of its many imperfections, the current system works pretty well and newer systems will, over time, drift back to the old way of things.

(A digression: the real problem isn’t the system per se but, as others have noted, the sheer volume pushed through it. That volume must be dealt with at breakneck pace just to stand still, inevitably pushing quality below what we’d all like. The often unacknowledged problem with speed in the publishing process is that people find it hard to accept that journals do anything of value in such a short space of time.)

Don’t get me wrong, there will be refinements:

  • I particularly like the idea of publishing peer review comments alongside articles. My addition to this suggestion is that all comments be published - i.e., including those from other journals that rejected the paper.

  • Peer reviewers should definitely be paid, although, in my experience, this does little to improve the timeliness or quality of the process.

  • Open access is not the great democratising solution it is portrayed to be. It’s one of those “be careful what you wish for” situations.

  • As above, pre-prints are not without complications. They’re already bandied about as if they’re equivalent to peer-reviewed work.

The real solution to the open access conundrum is centrally funded journals, but that’s not without its own pitfalls…

(Obviously I have a conflict of interest)

2 Likes

+1. The “publish” model may be good for religious texts, where some revealed truth is written down once and remains valid forever.

But scientific knowledge isn’t carved in stone or on golden plates. I’d like a model, a technology, more like the one open source software has:

  • with ‘roadmaps’ that contain the medium-term plans: what methods will be used? what will NOT be done (in software, these are called “non-features”)
  • development is open to outside contributions, which the core team may or may not adopt
  • outsiders are free to “fork” the project if they have different plans
  • of course, all code & data are available for inspection, for your own use, etc.
  • projects may contain links to related (or forked) projects

Here’s how “issues” look in the Inkscape software:

You can also see what is being done with those remarks/criticisms. You can see that some issues are still open.

Merge requests are also interesting. Here, solutions are proposed to the core team; some of those are adopted (“merged”) into the software.

**Quality** is controlled not by anonymous “gurus” but by “the community.” Of course, there is much to be said about this, but that would be outside the present scope.

My main message is that a switch is needed from a closed, “expert-based” system (“This article is good because it is in a prestigious journal and has been reviewed by anonymous peer reviewers! We would give you the code or the data, but just trust us!”) to an open, meritocratic system that is geared towards cooperation and collaboration.

The present Open Science initiatives move in the right direction!

Added: sorry if my contribution is of a slightly broader nature and doesn’t contain a ready solution. Fortunately many people who are much smarter than me are already working hard on this problem!

2 Likes

Interesting ideas, @pieter.

I recently watched a video explaining that, from the point of view of someone who had made a piece of software, upkeep was not free, even though the software is freely distributed. Community members would ping him constantly with “why don’t you just add this feature” or “why didn’t you do this,” etc.

I’m not sure that teams that conduct experiments can be expected both to share data and code and to be involved in the community upkeep of those resources - at least not without degrading their capacity to take on new projects. Certainly, this doesn’t look feasible without funding designated for ongoing support-desk functions.

Perhaps I’ve drifted too far from the thread’s focus on publishing, but asking teams that do research to support the community’s use of their data & code on an ongoing basis is a new and unfunded burden. It really needs some careful thought.

1 Like

I’m glad to see this put so articulately, because it’s a concern that I’ve had as well (albeit one that I’d never formulated this clearly in public). I may come back and add more later, but I thought this comment was worth a vote of support.

4 Likes

Does anyone recall the Twitter thread where @venkmurthy or @f2harrell came up with a weighting system to value peer-review work for prioritized evaluation of one’s own work? There was some sort of index.

2 Likes

@byrdjb and @ADAlthousePhD - Those are justified concerns, and they are indeed a problem with open source software as well.

Any publication / open science system will cost time and money. The good news is: there IS money. At the moment it is spent on expensive journals. Perhaps a part of that can be spent on Open science frameworks and/or other platforms for collaborative research and publication instead.

2 Likes

That was not me! Perhaps @f2harrell?

@pieter and @ADAlthousePHD Here is the talk I mentioned. It’s well worth watching, in my opinion: https://www.youtube.com/watch?v=o_4EX4dPppA.

Key excerpt: “Oh, hello, is this the somebody store? Yes, we would like somebody to take unsolicited advice on the internet. Oh, yes, it’s really mean. Really rough. And yea, no, no one’s gonna say ‘thank you.’ No, yea, it’s unpaid … You don’t have anybody? I was told there would be somebody who would do this?!” Obviously, this is a sort of restatement of the tragedy of the commons.

Pairs well with https://www.youtube.com/watch?v=N2zK3sAtr-4&feature=youtu.be, which is hilarious / scary / true.

The troubling thing is that they’re talking about the same issues, but they’re coming at them from the perspectives of the information provider and the end user, respectively, and each is beset with different, serious problems that aren’t easy to fix.

2 Likes

A mandatory preprint requirement will be resisted by those claiming proprietary rights.

1 Like

For anyone interested, here is a list of open science ecosystems and projects.

https://hackmd.io/c/r11YTzX9f/%2FQTaG8S3LQAeBfnCT-EFFfQ

There has been a lot of exploration of different IT/tech solutions to the current academic publishing system. Here is a good article that reviews and envisions different possible social platforms for peer review:

https://f1000research.com/articles/6-1151/v1

There is obviously much more out there, but I wanted to provide some references for anyone interested in the technical side.