The Future of Scientific Publishing

The GitHub software model has an answer for this. Open up the project to new authors, or allow other research teams to make a “branch” (a fork, in GitHub terms) of the project that the first author may decide to merge into the main research “trunk” later.

1 Like

Haven’t watched the whole first one yet; the second one is hilarious but sadly true.

1 Like

Good points. My experience with GitHub suggests that people can & will still ping the original team with requests for support. That takes time, which in the case of researchers is typically already committed under contract to novel research, with no funds for ongoing support of completed projects. The talk I link above by the designer of the Elm language is quite illuminating regarding the challenges, even for someone who is completely immersed in the technologies and platforms of collaboration, as he is.

2 Likes

Not sure I share that concern. With the ‘observed power’ issue that you and Zad were discussing on here, the surgeons defended their position (https://insights.ovid.com/pubmed?pmid=29979247), but it is not made clear enough to them that their position is a minority one; they publish their rejoinder and that is the end of that. I often hear clinicians say “but they did it in NEJM/JAMA etc” as if these journals are infallible, and bad habits are perpetuated. It would be great if it were made explicit, publicly, just how contentious some of these things are… one letter objecting and one letter responding gives a false impression that the matter has been dealt with.

2 Likes

The issue of letters to the editor that you point out is a good one. The message is ultimately controlled by the editors and the authors, whether right or wrong. That needs a fix, but the solution isn’t obvious. I do believe a reputation-based scoring system could actually work. In real life we rely on reputation a lot, even though it is flawed. A digital system is unlikely to be perfect, but perhaps perfection is not achievable anyway?

There’s a Black Mirror episode about reputation scores. All metrics become gamed if not constructed carefully.

2 Likes

All metrics become gamed, period. FTFY

3 Likes

That’s an interesting point, Paul. The surgeons published a well-intentioned article about Type 2 error, but accompanied by some dumb comments about post-hoc power using the observed effect size. Two or three statisticians saw it and submitted replies explaining that post-hoc power using the observed effect size is dumb because it’s monotonically related to the p-value, which the journal (to their credit!) published, along with the original authors’ reply that doubled down on their position with some incredibly naive comments like “We would point out that being redundant is not the same as being wrong” (which @zad and I have since replied to… and the journal recently accepted our reply, with yet another indication that they will permit the authors to reply again, so we’re now going to be at the reply-to-the-reply-to-the-reply-to-the-reply…)
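
For anyone who hasn’t seen the argument spelled out, here is a minimal sketch of why “observed power” carries no information beyond the p-value itself (assuming a two-sided z-test and using scipy; the function is illustrative, not taken from the letters):

```python
# Illustration: for a two-sided z-test, "observed power" computed from the
# observed effect size is a deterministic, monotone function of the p-value.
from scipy.stats import norm

def observed_power_two_sided_z(p, alpha=0.05):
    """Post-hoc 'observed power' implied by a two-sided z-test p-value."""
    z = norm.ppf(1 - p / 2)           # |z| recovered from the p-value
    z_crit = norm.ppf(1 - alpha / 2)  # critical value at level alpha
    # Power of a z-test whose true effect equals the observed effect:
    return norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)

for p in [0.01, 0.05, 0.20, 0.50, 0.80]:
    print(f"p = {p:.2f} -> observed power = {observed_power_two_sided_z(p):.3f}")
```

Larger p always maps to lower “observed power” (p = 0.05 maps to power ≈ 0.5), so reporting it really is redundant with the p-value.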

You make an interesting point that since they’re only getting one or two piecemeal replies via letters to the editor in the journal, they may not realize just how flawed their position is, and perhaps it would be made more clear if there were a few dozen statisticians voting/commenting on the article. Perhaps that is true; from what I’ve seen on Twitter, though, I worry that with particularly influential physicians, an equal or greater number of their minions (surgery residents, people who trained under them, people who just think they’re never wrong, etc. - generally, people with even less qualification to discuss the matter) would show up to vote “in support” because they also don’t understand the issue well enough to see the problem; they just know their attending surgeon wrote this cool article about something called “post hoc power” where they showed up the pencil-necked geek statisticians…

4 Likes

Pencil-necked and @ADAlthousePhD in the same sentence sure is funny!

I want to mention an important component of the success of stackoverflow.com and stats.stackexchange.com. People who add comments and answers (and pose good questions) accumulate reputation points from upvotes across the whole set of responses they have provided on the site. So when someone responds to a new posting, readers can see their reputation points and give weight accordingly. That doesn’t mean the response is competent or accurate, but it does mean the respondent has been seen as helpful in previous interactions.
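
As a toy sketch of the mechanism (the point values and names here are hypothetical, not the actual Stack Exchange rules): upvotes raise both a response’s score and its author’s accumulated reputation, and readers see that reputation next to each new response.

```python
# Toy sketch of reputation accrual and reputation-weighted display.
# Point values and names are hypothetical, not the Stack Exchange formula.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    reputation: int = 0

@dataclass
class Response:
    author: User
    text: str
    upvotes: int = 0

def upvote(resp: Response, points_per_upvote: int = 10) -> None:
    """An upvote raises both the response's score and its author's reputation."""
    resp.upvotes += 1
    resp.author.reputation += points_per_upvote

alice, bob = User("alice"), User("bob")
answers = [Response(alice, "Consider a likelihood ratio test."),
           Response(bob, "Just check whether p < 0.05.")]
for _ in range(12):
    upvote(answers[0])
upvote(answers[1])

# Readers see each answer alongside its author's track record:
for a in sorted(answers, key=lambda r: r.author.reputation, reverse=True):
    print(f"[rep {a.author.reputation:4d}] {a.author.name}: {a.text}")
```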

4 Likes

This is a great point. I suspect that system works really well because those sites are mostly about very focused questions and problem solving.

In a site more broadly dedicated to publication and pre/post-publication peer review, I worry there will be plenty of folks who rate up/down due to biases and friendships too. For example, a paper showing that a condition can be managed equally well medically vs. surgically is likely to get more dings from surgeons about cross-over issues than from non-surgeons (and vice versa when the shoe is on the other foot).

3 Likes

How about articles being rated by public readers only during a period in which the introduction, methods, and censored results are presented? Then the results are revealed through staged publishing at a later point.

3 Likes

Great idea, but I worry about practicality. One of the limitations of letters to the editor at many journals is the very limited window in which they are accepted. Finding a good balance between a sufficient blinding period and not delaying result dissemination is tough.

While post hoc critiques can be problematic, sometimes the numbers themselves hint at the underlying problems in ways that would be non-obvious without them.

Yeah, tough to coordinate. But perhaps if there were a way to coordinate clinicaltrials.gov submissions with journal submissions, that would create a system where publication isn’t based on results. Basically, for a clinical trial, the journal article should be submitted during initiation/recruitment and prior to adjudicated results. Perhaps this could be done with observational studies too, instead of STROBE statements submitted post-results. Anyway, we are a long way from conceiving of such a model, unless an already powerful journal required this type of approach to handling publication bias: “Manuscript accepted pending a priori result reporting.”

2 Likes

Seems like you are planning a 2.0 or 3.0 version!

Ha, yeah. Coming up with a long-term vision and figuring out incrementally how to get there. The biggest movement would come from currently high-IF journals changing their approach in response to directed criticism. Getting some journals to drop p-values from Table 1 apparently took decades. Others still don’t care or seem to be unaware of poor statistical practices.

1 Like

Has there been much written about why PLoS hasn’t really made a considerable dent in scientific publishing? Does the author-fee model limit submissions to only those who are well funded? If I recall correctly, there was a recent paper/abstract describing how better-funded institutions/researchers publish in open-access journals more than in traditional journals.

I know there have been quality concerns, but I’m not sure why.

How has PLoS performed poorly, and how would an alternative platform/journal perform better?

2 Likes

There is a new platform called Researchers.ONE, developed by statistician Harry Crane, that might be of interest. He has an interesting paper on how to use betting on replication to weed out unverifiable results.

Harry Crane: The Fundamental Principle of Probability
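
As I understand the general idea (a hedged sketch of the principle, not the actual Researchers.ONE mechanism): if an author asserts that a result will replicate with probability p, they should be willing to back that assertion at the corresponding odds, which makes inflated confidence expensive. A toy calculation in Python:

```python
# Toy illustration of probability-as-betting-odds. If an author asserts a
# replication probability p, a bet that is fair under their own claim has
# them risk `stake` to win `payout` if the result replicates.
# This is a sketch of the general principle, not Researchers.ONE's mechanism.

def fair_stake(p: float, payout: float = 100.0) -> float:
    """Stake making the bet zero-expected-value under the asserted probability p."""
    assert 0 < p < 1
    # Expected value for the asserter: p * payout - (1 - p) * stake = 0
    return payout * p / (1 - p)

for p in (0.5, 0.8, 0.95):
    print(f"claimed P(replicates) = {p:.2f}: "
          f"risk ${fair_stake(p):,.2f} to win $100 if it replicates")
```

The more confidence an author claims, the more they must put at risk; a claimed 95% replication probability means risking $1,900 to win $100, so unverifiable results become costly to assert.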

1 Like