Randomized non-comparative trials (RNCTs) are becoming increasingly popular, particularly in oncology, and are being published in prominent clinical journals. The idea is to randomize between two or more treatment arms and then not compare them with each other, but instead compare each arm with historical controls or prespecified values. Essentially, RNCTs act as single-arm trials for each treatment group. As in standard comparative randomized trials, convenience sampling is used, so there is no additional control of the sampling process to support group-specific uncertainty estimates (e.g., confidence intervals for survival outcomes in each treatment group).
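To make this concrete, here is a minimal sketch, with entirely hypothetical numbers, of how each arm of an RNCT ends up being analyzed: as its own one-sample test against a prespecified benchmark, with no between-arm comparison anywhere.

```python
# Sketch of a typical RNCT analysis: each randomized arm is treated as
# a single-arm trial tested against a prespecified benchmark response
# rate. All numbers (benchmark, sample sizes, responders) are hypothetical.
from scipy.stats import binomtest

BENCHMARK = 0.20                        # prespecified "historical" response rate
arms = {"A": (18, 60), "B": (14, 60)}   # (responders, n) per arm

for arm, (k, n) in arms.items():
    res = binomtest(k, n, p=BENCHMARK, alternative="greater")
    ci = res.proportion_ci(confidence_level=0.95)
    print(f"Arm {arm}: {k}/{n} responders, "
          f"95% CI ({ci.low:.2f}, {ci.high:.2f}), p={res.pvalue:.3f} vs benchmark")
# Note that the random assignment plays no role in either arm's inference.
```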
It is also unclear (at least to me) what the use of the random treatment assignment is. In standard randomized trials, the random treatment assignment underpins the uncertainty estimates for comparative between-group estimands. However, by their very nature, RNCTs do not prespecify any comparisons and often avoid showing, e.g., hazard ratios and their 95% CIs, although authors and editors may be tempted to show them if they are “positive” (i.e., “statistically significant”).
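By contrast, here is a toy illustration (simulated data, not from any actual trial) of the comparative inference that random assignment directly licenses, via a simple permutation test:

```python
# What randomization actually buys: the assignment mechanism itself
# justifies comparative inference, e.g., a permutation (randomization)
# test of the between-group difference. Data are simulated and illustrative.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0.5, 1.0, 60)   # outcomes, arm A
b = rng.normal(0.0, 1.0, 60)   # outcomes, arm B
obs = a.mean() - b.mean()

pooled = np.concatenate([a, b])
diffs = []
for _ in range(10_000):        # re-randomize labels under the null
    rng.shuffle(pooled)
    diffs.append(pooled[:60].mean() - pooled[60:].mean())
p = (np.abs(diffs) >= abs(obs)).mean()
print(f"observed difference = {obs:.2f}, permutation p = {p:.4f}")
# If no between-group comparison is ever made, this justification for
# randomizing is simply left on the table.
```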
We recently performed a scoping review of RNCTs and found that this design is indeed being used increasingly, with half of RNCTs reporting comparative between-group results, accompanied by significance testing in almost one-third of trials. Although we tried to steelman this design, it is hard to see what advantage is gained by random treatment assignment if we are not going to compare the treatments. Also, if we are going to analyze them as single-arm studies, why not instead increase the sample size for the arm of interest and/or control the sampling procedure to make sure our sample is comparable to, e.g., the historical controls we will use?
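A back-of-the-envelope calculation, with hypothetical numbers, makes the precision cost of splitting the sample across non-compared arms explicit:

```python
# Randomizing N patients across two arms that are never compared roughly
# widens each arm's confidence interval by sqrt(2) relative to a single-arm
# trial of size N. Proportion and N below are hypothetical.
import math

N, p = 120, 0.30
se_single_arm = math.sqrt(p * (1 - p) / N)        # all N on the arm of interest
se_rnct_arm   = math.sqrt(p * (1 - p) / (N / 2))  # N split 1:1, arms analyzed separately
print(f"SE, single-arm trial: {se_single_arm:.4f}")
print(f"SE, per RNCT arm:     {se_rnct_arm:.4f} "
      f"(ratio = {se_rnct_arm / se_single_arm:.3f}, i.e., sqrt(2))")
```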
The best argument I can think of for using RNCTs is that, if there is equipoise, it may make ethical sense to randomize the treatment assignment rather than expose our patients to the therapy of their (or their physician’s) preference. This argument does not feel right to me. Would appreciate thoughts from the members of this forum: is there a place for RNCT designs in practice, or are they indeed based on a fundamental conceptual error and should be strongly discouraged?
6 Likes
Is there a place for RNCT designs in practice? As I understand it, it seems to be signal searching.
For efficacy trials, I can’t see a justification. Possibly the RNCT design could be useful for an indication with no effective treatment? But here too a control can be used, and in my view should be used: best supportive care, with quality of life as the primary endpoint for comparison.
1 Like
I am so glad to see us alerted to the increasing use of this disastrous design. One symptom of bad thinking is that most examples do not even use covariate adjustment to account for simple drift in age or disease severity when comparing to historical data. To me the only cogent way to deal with this is to explicitly model bias due to non-concurrency and non-randomization, as done here.
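For intuition only, here is a minimal sketch, not the approach linked above, of what explicitly modeling such bias can look like: an additive bias term with prior SD tau (a hypothetical value) widens the interval for any comparison against non-concurrent controls.

```python
# Toy version of "modeling the bias": treat the treatment-vs-historical-
# control difference as contaminated by an additive bias ~ N(0, tau^2),
# so prior uncertainty about non-concurrency is added to the sampling
# variance instead of being ignored. All numbers are hypothetical.
import math

d_hat, se_d = 0.25, 0.10   # observed difference vs historical controls, and its SE
tau = 0.15                 # prior SD of the non-concurrency / selection bias

se_total = math.sqrt(se_d**2 + tau**2)   # bias-acknowledging SE
z = 1.96
print(f"Naive 95% CI:      ({d_hat - z * se_d:.2f}, {d_hat + z * se_d:.2f})")
print(f"Bias-aware 95% CI: ({d_hat - z * se_total:.2f}, {d_hat + z * se_total:.2f})")
```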
3 Likes
Is there some canonical set of methodological references on RNCTs, Pavlos? Or is this more of an accidental development? Who are the statisticians supporting these designs?
1 Like
A bit of shameless self-promotion: I wrote about this issue here.
4 Likes
To answer the question: I think this type of “trial” is an absolutely terrible idea. It’s like people have forgotten what the whole point of randomisation is.
3 Likes
Good questions. The closest we could find within the canonical methodology literature is this paper, but it is not typically cited in the actual RNCT papers. Unless we are missing something, it appears to be mostly an accidental development that began quietly with the publication of small RNCTs in specialized oncology journals but is now expanding aggressively, both within oncology and beyond.
We realized we had to alert the methodology community to this new meme after an RNCT was published in the Lancet without stating anywhere in the title or abstract that it is a non-comparative trial. The only way to know is to look deep into the methods section of the full text, or to notice in the abstract that the treatment and placebo groups are never actually compared with each other. This hit particularly hard because one can tell there was immense effort to conduct and finish this trial, dedicated to a particularly rare cancer group (metastatic phaeochromocytomas and paragangliomas) that represents an unmet need. It is possible that those involved in this study would have chosen a different design had they known that the “randomization” in RNCTs does not confer the evidence-based-medicine aura of standard RCTs.
I have never directly interacted with the statisticians who promote RNCTs, but I have heard from trainees who participate in oncology trial design workshops that RNCTs are recommended by biostatisticians in scenarios where there are not enough resources (e.g., funding or patient numbers) to meaningfully power a comparative RCT. In this situation, RNCTs are recommended as a way to have our (random) cake and eat it too.
Really glad others like @simongates have also noticed this problem. We first tried to publish our paper in clinical journals and were thoroughly rejected by their statistical editors, mainly due to low interest in showcasing this issue. We then submitted to the Journal of Clinical Epidemiology, where the peer reviewers and editor appeared surprised that RNCTs exist, which was a relief. There does appear to be a disconnect between statisticians immersed in methodology and at least some of the statisticians serving on the editorial boards of clinical journals. The two groups do not always overlap.
3 Likes
I find it incredible that this sort of rubbish is being published in “top” journals like Lancet and JCO, which make grand claims like “the single most credible, authoritative resource for disseminating significant clinical oncology research.”
I think that, in oncology at least, part of the root of this practice may be that people are very used to seeing single-arm trials. These are often used in oncology, sometimes for good reasons, but they are frequently over-interpreted. There’s a very limited amount you can say about comparative efficacy from a single-arm trial (without getting into the complicated business of making appropriate comparisons with historical controls, which is rarely done), but that doesn’t stop people doing it. This really struck me when I started working in oncology a few years ago.
I had a look on the Lancet and JCO websites but I couldn’t see any information about their statistical editors; does anyone know if this is available anywhere? (I had a fun exchange with the statistical editors of NEJM a few years ago about their requirement for p-values in the baseline tables of RCTs, which they have now changed, several years later.)
4 Likes
There is a general theme here that bothers me no end: statisticians often appear to prioritize helpfulness over principles.
2 Likes
MSKCC biostatistician Alexia Iasonos is listed as JCO’s Deputy Editor. See, however, this post; good luck!
1 Like
In fact, motivated in part by your blogging of these NEJM exchanges, we reviewed here the prevalence and implications of this “table 1 fallacy” in oncology RCTs.
On the Editorial Board tab on that page, there’s a whole list of biostatistics board members. I only recognise one or two names though.
Edit: adding the link again:
https://ascopubs.org/jco/about/editorial-roster
2 Likes
Among those names, I recognized Boris Freidlin. He often co-authors methodologic papers with Edward L. Korn. Their work strikes me as thoughtful and substantive.
2 Likes