I do recall the AAFP top 20 articles list. What’s notable is that it heavily emphasizes articles that suggest physicians no longer have to do X. I can see why that would be popular in a busy practice. But maybe there are things that should be done more often, and I don’t anticipate those things being popular in a survey, voting, or contest mode.
I could see the specialty you’re describing, as long as the people trained are rounding, in clinics, etc, seeing medicine up close & personal to inform their understanding of medical decision-making.
Perhaps this is an opportunity for complex analysis or artificial intelligence. I have no idea how to set this up but perhaps someone has done this in another field. What I’m alluding to is to textually analyze criticisms of an article that are posted alongside the article, with the criticisms weighted by a reputation score of the critics. At first the system would be circular, and reputation scores would need constant updating, changing the initial “rating” of the paper. But there is perhaps the possibility of separating methods quality from clinical usefulness.
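Just to make the circularity concrete, here is a minimal sketch of how such a reputation-weighted loop might behave. Everything in it is an illustrative assumption, not an existing system: scores live on a [0, 1] scale, a paper’s rating is the reputation-weighted mean of its critics’ scores, and a critic’s reputation is updated toward agreement with the current consensus (so the two quantities are computed iteratively until they settle).

```python
def rate_papers(reviews, n_iter=50):
    """Hypothetical reputation-weighted rating loop.

    reviews: dict mapping paper -> list of (critic, score in [0, 1]).
    Returns (ratings, reputation) after iterating the circular update.
    """
    critics = {c for scores in reviews.values() for c, _ in scores}
    reputation = {c: 1.0 for c in critics}   # start everyone equal
    ratings = {}
    for _ in range(n_iter):
        # 1. Paper rating = reputation-weighted average of critic scores.
        for paper, scores in reviews.items():
            total = sum(reputation[c] for c, _ in scores)
            ratings[paper] = sum(reputation[c] * s for c, s in scores) / total
        # 2. Critic reputation = closeness to the current consensus,
        #    floored so no critic's weight collapses to zero.
        for c in critics:
            errs = [abs(s - ratings[p])
                    for p, scores in reviews.items()
                    for cc, s in scores if cc == c]
            reputation[c] = max(0.05, 1.0 - sum(errs) / len(errs))
    return ratings, reputation
```

In this toy version, critics who persistently disagree with the consensus lose influence over the ratings, which is one crude way of “separating” signal from noise; a real system would obviously need safeguards so that legitimate dissent isn’t simply voted down.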
The extreme limit of all this, as someone proposed on Twitter a few weeks ago, is to have research teams publish data without commentary, get major authorship credit for generating the data, and letting the world have at the analysis of the data.
It’s very possible, even likely, that people who are trained are uneven in quality, and that some who are “untrained” could exceed some of the “trained” in quality. I would suggest that the value of a rating be weighted based on all prior contributions (primary and comments).
This has been a very interesting read with a lot of great suggestions. One thing I have noticed that hasn’t been discussed is the strong incentive for academics to pursue quantity in publications. CVs padded with a lot of garbage science look better on their face, because you can’t actually judge the quality of the articles from the citation alone (except for the journal it’s published in, which is no guarantee of quality either). Nevertheless, until the incentives to publish quantity over quality are removed, I am pessimistic that many of these great ideas will gain traction.
Please, let’s straighten one thing out now: Frank has (easily) forgotten more than I currently know about statistics. I’m early in my career, just trying to do the best I can with my current skills while occasionally picking up new things when I have the time. I digress, though. I agree with your overarching point - of course I’m an advocate for “more involvement of statisticians with contextual knowledge/experience in medicine” - but that isn’t a silver bullet by itself, and just as “physicians” are not a uniform lot, neither are “statisticians.”
So while I love this idea…
…I have my doubts that this can ever be implemented in real life. I understand that sometimes tossing around pie-in-the-sky theoretical solutions leads to an effective discussion of what can be implemented, and while I appreciate the parallel of a trained researcher interpreting a paper to a cardiologist reading a stress test (and think there’s some truth to the parallel), I can’t imagine “trained statisticians will interpret all the papers and sort them out for everyone” is going to fly, for one primary reason: who’s going to pay for this job?
I share the skepticism of reader response / comments / votes as an improvement. I’m surprised that Frank likes this idea, given how often we (realistically, a relatively small vocal minority on Twitter) roll our eyes at poor quality statistics/methods embraced and reported in journals. I fear that this would devolve things further into popularity contests of people leaving good comments/votes about things they like and slinging mud at things they don’t like.
That, also, is a critical point. Again, the pool of people who comment here / discuss methods on Twitter is a small fraction (like, small enough that I can rattle off most of us by name…) of the pool of consumers of medical literature, and Mike J reminds us…
So, having worked my way backwards through this thread, what are the suggestions that I’ve seen so far which (IMO) could be implemented sooner rather than later, and in the current environment?
I’ve become a pretty big fan of this idea. I see some of the downsides of revealing the reviewers’ identities (I know that several of us who have occasionally dared to critique articles on Twitter have received threatening phone calls / messages telling us to back off or face consequences), but posting the peer-review comments (anonymized) allows the reader some additional context regarding why the final publication looks the way that it does.
I also very much like the idea of finding ways to encourage involvement in reviewing - I have been part of a couple Twitter threads recently where we bemoan slow review times, but the most common cause of long wait times is difficulty finding 2 qualified reviewers (sometimes editors have to send out a dozen or more invitations to get 2 people to agree). If “just pay the reviewers for their efforts” is not a feasible solution, alternative methods such as Boback’s suggestion here of earned-credits seem a worthwhile alternative to explore. In exchange for XX timely reviews (meeting quality threshold, assessed by the editor), you earn 1 “expedited review” submission, something like that.
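The earned-credit exchange described above can be sketched in a few lines. To be clear, everything here is an assumption for illustration - the class name, the 5-reviews-per-credit exchange rate (standing in for the unspecified “XX”), and the idea that the editor flags timeliness and quality when logging a review:

```python
REVIEWS_PER_CREDIT = 5  # hypothetical stand-in for "XX timely reviews"

class ReviewerLedger:
    """Toy ledger for an earned-credit review-incentive scheme."""

    def __init__(self):
        self.timely_reviews = {}   # reviewer -> count of credited reviews
        self.credits = {}          # reviewer -> expedited-review credits

    def log_review(self, reviewer, timely, quality_ok):
        """Editor records a completed review; only timely reviews that
        meet the quality threshold count toward a credit."""
        if not (timely and quality_ok):
            return
        n = self.timely_reviews.get(reviewer, 0) + 1
        self.timely_reviews[reviewer] = n
        if n % REVIEWS_PER_CREDIT == 0:
            self.credits[reviewer] = self.credits.get(reviewer, 0) + 1

    def redeem_expedited(self, reviewer):
        """Spend one credit on an expedited submission, if available."""
        if self.credits.get(reviewer, 0) > 0:
            self.credits[reviewer] -= 1
            return True
        return False
```

The interesting design questions are the ones the code glosses over: who audits the editor’s quality assessment, and whether “expedited” can be promised without distorting the queue for everyone else.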
This falls into the dreaded bin of things that are “easy to say, makes complete sense, but challenging to actually implement” - for reasons discussed in some of the ensuing posts. If you have the time, I’d encourage you to read Doug Altman’s 1998 “Statistical Reviewing for Medical Journals” - especially his list of 5 troublesome questions at the end.
(i) How much of a purist should the referee be (especially if it is unlikely that the ‘correct’ analysis will alter the findings)?
(ii) How much detail should the referee require of highly complex analyses that would not be understood by most readers?
(iii) Should the referee take account of the expectation that the authors have no access to expert statistical help? If so, how?
(iv) How should the referee deal with papers using methods that are (widely) used by other statisticians but which he/she does not approve of?
(v) When is a definite error a fatal flaw?
I struggle with some or all of those 5 questions in many of the papers that I review.
I’d also be remiss if I didn’t cast support for Mike T’s comment:
I’ve observed this firsthand - a nontrivial slice of faculty producing these manuscripts seem to scarcely care at all about quality or accuracy of what they publish. As long as their CV is getting longer, things are working fine in their eyes, and I had a reasonably senior academic physician admit directly that the first thing they tend to look at when evaluating a faculty candidate was the raw number of publications the person has (also glancing to see if any of them are in big-name journals, but clearly, the first thing that impressed them was a big number). So (IMO) changing the scientific publication model needs to happen in concert with research universities changing their evaluations of faculty for hiring, promotion and tenure. I’m not sure whether one of these things can fully drive the other…
This is 100% a pie in the sky idea. Even I am not taking it seriously.
But just for fun: the way it could work is to have “paper/data interpretation” billable by licensed professionals, the way current fee-for-service works for most healthcare. It would add “skin in the game”, i.e. one is paid and held accountable for making valid/consistent reports. In radiology, for example, there is considerable uncertainty, but radiologists do a pretty good job finding agreement or clarifying/hedging on areas of uncertainty. Like any fee-for-service arrangement, billing could be done by time or by documentation of work. You could then have registries that aggregate the reports and provide meta-commentary.
I suspect the future of scientific publishing will be essentially unchanged from the present of scientific publishing. It might go through various attempts at some sort of utopian solution, but ultimately, for all of its many imperfections, the current system works pretty well and newer systems will, over time, drift back to the old way of things.
(A digression: The real problem isn’t the system per se but, as others have noted, the sheer volume pushed through. This volume must be dealt with at breakneck pace just to stand still, inevitably leading to a fall below the quality we’d all like. The often unacknowledged problem with speed in the publishing process is that people find it hard to accept that journals do anything of value in such a short space of time.)
Don’t get me wrong, there will be refinements:
I particularly like the idea of publishing peer review comments alongside articles. My addition to this suggestion is that all comments are published - ie, including those from other journals that rejected the paper.
Peer reviewers should definitely be paid, although, in my experience, this does little to improve the timeliness or quality of the process.
Open access is not the great democratising solution it is portrayed as. It’s one of those be-careful-what-you-wish-for situations.
As above, pre-prints are not without complications. They’re already bandied about as if they’re equivalent to peer reviewed work.
The real solution to the open access conundrum is centrally funded journals, but that’s not without its own pitfalls…
**Quality** is controlled not by anonymous “gurus” but by “the community”. Of course there is much to be said about this, but that would be outside the present scope.
My main message is that a switch is needed from a closed, “expert based” system (“This article is good because it is in a prestigious journal and has been reviewed by anonymous peer reviewers! We won’t give you the code or the data - just trust us!”) to an open, meritocratic system that is geared towards cooperation and collaboration.
The present Open Science initiatives move in the right direction!
Added: sorry if my contribution is of a slightly broader nature and doesn’t contain a ready solution. Fortunately many people who are much smarter than me are already working hard on this problem!
I recently watched a video explaining that, from the point of view of someone who had made a piece of software, upkeep was not free, even though the software is freely distributed. Community members would ping him constantly with “why don’t you just add this feature” or “why didn’t you do this”, etc.
I’m not sure that teams that conduct experiments can be expected to both share data and code and be involved in community upkeep of those resources–at least without degrading their capacity to do a new project. Certainly, this doesn’t look feasible without funding designated for ongoing support desk functions, anyway.
Perhaps I’ve drifted too far from the thread’s focus on publishing, but having teams that do research support the community’s use of these data & code on an ongoing basis is a new & unfunded burden on teams. It really needs some careful thought.
I’m glad to see this put so articulately, because it’s a concern that I’ve had as well (albeit one that I’d never formulated this clearly in public). I may come back and add more later but thought this comment was worth casting a vote in support.
@byrdjb and @ADAlthousePhD - Those are justified concerns, and these form indeed a problem with open source software.
Any publication / open science system will cost time and money. The good news is: there IS money. At the moment it is spent on expensive journals. Perhaps a part of that can be spent on Open science frameworks and/or other platforms for collaborative research and publication instead.
Key excerpt: “Oh, hello, is this the somebody store? Yes, we would like somebody to take unsolicited advice on the internet. Oh, yes, it’s really mean. Really rough. And yea, no, no one’s gonna say ‘thank you.’ No, yea, it’s unpaid … You don’t have anybody? I was told there would be somebody who would do this?!” Obviously, this is a sort of restatement of the tragedy of the commons.
The troubling thing is that they’re talking about the same issues, but they’re coming at them from the perspective of the provider of information and the end user, and each is beset with different, serious problems that aren’t easy to fix.
There has been a lot of exploration of different IT/tech solutions to the current academic publishing system. Here is a good article that reviews and envisions different possible social platforms for peer review: