An article I read the other day caused me to view this trend in a much more cynical light than I had up to this point. Is it possible that this lipstick-on-a-pig trend with non-comparative randomized trials is being driven not so much by methodologic naïveté as by the mercenary interests of companies that depend on private investment?
I understand less than nothing about the financial world. But aren’t there people who would stand to benefit from methodologically nightmarish mid-phase efforts to turn chicken *!@# into chicken salad?
Maybe this angle is unfairly uncharitable. But why would the biotech industry be so different from the tech industry?
“It’s not simply that one piece of technology is overhyped, it’s that hype is a necessary ingredient of the current business ecosystem of the tech industry. We should examine how often the financial incentive for hype is rewarded without any real social returns, without any meaningful progress in technology, without these tools and services and worlds ever actually manifesting. That’s key to understanding the growing chasm between the narrative of techno-optimists and the reality of our tech-encumbered world.”
This might be taking the discussion in a direction that deserves a thread of its own…
Part of the motivation behind “randomised non-comparative trials” may be the over-use and inappropriate use of single-arm trials. I have become pretty convinced that in oncology in particular, and maybe other areas of medicine too, there is a problem with what I’ve called “single-arm thinking”, which is the idea that it’s possible to evaluate treatment effectiveness using single-arm trials.
Actually, that idea is probably not completely wrong, but doing so requires a lot of careful work in understanding or modelling the outcomes that would be expected under the control treatment, or in constructing external control groups, not to mention being very realistic about the extra uncertainty such methods involve.
That isn’t what almost all single-arm trials do though. Repeatedly I see single-arm trials that say they aim to “evaluate a therapy” or “estimate the response rate (or whatever) of a particular therapy,” with no acknowledgement that the results you get from a single arm are going to be VERY dependent on the patients that get recruited. Patient heterogeneity and selection bias are real issues! If there is a comparison, it is almost always with some assumed historical rate, with no guarantees that this applies to the patients in the trial.
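To make the selection-bias point concrete, here is a minimal sketch with entirely invented numbers: a therapy with no true benefit whose single-arm response rate still “beats” a historical control, purely because the trial recruits a fitter patient mix than the population in which the historical rate was estimated.

```python
import random

# Illustrative sketch, all numbers invented: response depends only on
# prognosis, not on the (inert) new therapy.
P_RESPONSE_GOOD = 0.40   # assumed response prob., good-prognosis patients
P_RESPONSE_POOR = 0.10   # assumed response prob., poor-prognosis patients

# Historical control rate was estimated in a 50/50 prognosis mix:
historical_rate = 0.5 * P_RESPONSE_GOOD + 0.5 * P_RESPONSE_POOR   # 0.25

# The single-arm trial recruits fitter patients: 80% good prognosis.
expected_trial_rate = 0.8 * P_RESPONSE_GOOD + 0.2 * P_RESPONSE_POOR  # 0.34

print(f"historical control rate:        {historical_rate:.2f}")
print(f"expected single-arm trial rate: {expected_trial_rate:.2f}")

# One simulated n=60 trial just adds sampling noise around 0.34:
random.seed(1)
n = 60
responses = sum(
    random.random() < (P_RESPONSE_GOOD if random.random() < 0.8 else P_RESPONSE_POOR)
    for _ in range(n)
)
print(f"one simulated trial:            {responses / n:.2f}")
```

An inert therapy looks “active” against history here simply because of who got recruited; nothing in the single-arm data alone can distinguish this from a real treatment effect.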
Of course, single-arm trials have legitimate uses – but the fact that they can be useful in some circumstances seems to have led to serious mission creep, so they now get used in many situations where they are just not suitable. I get that they have attractive features – everyone gets the exciting new therapy and there is no need to explain randomisation. But that isn’t worth the cost of making the trial useless.
This was brought home to me by a recent experience with a randomised multi-armed oncology trial, which made it all the way through to being funded, without specifying what comparisons it was going to make – because apparently everyone was just thinking about each arm in isolation and assuming that the results of each single arm were meaningful in themselves.
I don’t know if I’m overstating this issue – maybe I’ve just had some bad experiences – but it almost feels like there is a collective conspiracy to delude ourselves.
Indeed. It is also related to the frequent practice of making treatment group-specific inferences by showing 95% CIs for each treatment group in RCTs, which implicitly treats them as interchangeable with the far more reliable 95% CIs for between-group comparisons.
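A small worked example (invented counts) of why arm-wise CIs are a poor substitute for the CI of the comparison: the two per-arm 95% CIs below overlap, yet the 95% CI for the difference in proportions excludes zero.

```python
import math

# Minimal sketch, invented data: two arms of an RCT with binary response.
def wald_ci(successes: int, n: int):
    """Approximate (Wald) 95% CI for a single proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - 1.96 * se, p + 1.96 * se

n_a, x_a = 100, 50   # arm A: 50% response
n_b, x_b = 100, 36   # arm B: 36% response

ci_a = wald_ci(x_a, n_a)
ci_b = wald_ci(x_b, n_b)

# Wald 95% CI for the difference in proportions -- the comparison itself:
p_a, p_b = x_a / n_a, x_b / n_b
se_diff = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
diff_lo = (p_a - p_b) - 1.96 * se_diff
diff_hi = (p_a - p_b) + 1.96 * se_diff

print(f"arm A 95% CI:       ({ci_a[0]:.3f}, {ci_a[1]:.3f})")
print(f"arm B 95% CI:       ({ci_b[0]:.3f}, {ci_b[1]:.3f})")
print(f"difference 95% CI:  ({diff_lo:.3f}, {diff_hi:.3f})")
```

Eyeballing overlap of the two arm-wise intervals would suggest “no difference”, while the interval for the comparison, the quantity an RCT is actually designed to estimate, tells the opposite story.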
On a related note, just last week I gave an ISBS webinar lecture to applied biostatisticians raising awareness of RNCTs (from 40:50 onwards). The beginning of the lecture emphasizes that single-arm trials can be helpful in certain situations, contributing, for example, to the recent improvements in survival outcomes of patients with renal medullary carcinoma, a clinically and molecularly homogeneous, highly aggressive malignancy. But in many other contexts we should randomize and then dare to compare.
At this point we have more than done our due diligence highlighting this issue across medicine. It is time for others, including statistical and clinical peer reviewers and editors, to step up and contain this torrent.
This is the authors’ stated justification for using the RNCT design (bolding is mine):
“The randomized, noncomparative design was selected over performing two sequential, single-arm phase II studies to distribute unknown prognostic characteristics and to reduce time trend bias related to the emergence of human papillomavirus-associated disease in HNC clinical trials…”
I don’t understand what their bolded phrases mean. Do you?
Also: can you summarize the purpose of phase II trials in oncology? Specifically, can you explain what function randomization serves in phase II in fields like oncology, where the biological activity of a therapy can be discerned without a trial design with more than one arm (since tumours don’t spontaneously resolve over time)?
Addendum (Uggh #2): Never mind, Pavlos. I just came across a 2019 publication on exactly this topic: Grayling M et al., “Review of Perspectives on the Use of Randomization in Phase II Oncology Trials.” Clearly this is much too big a question to be tossed out so casually. What a dog’s breakfast (the topic, not the article)… Also managed to find the very first published RNCT- the authors were Frankenstein, V et al…
This is the problem: in many single-arm trials you can’t confidently ascribe biological activity to a new treatment, because patients receive other treatments. Very often the question is about adding a new therapy to existing treatments, which requires randomisation (or some hard work to produce an appropriate external control group).
This “Multiple opportunities exist…” section seems particularly strong to me, with so many constructive [and well-referenced] alternative methods. Yet Ferris, Bauman, Wang & Zandburg tellingly ignore them in their reply.
4. Take responsibility for the face of the world. The symbols of today enable the reality of tomorrow. Notice the swastikas and other signs of hate. Do not look away, and do not get used to them. Remove them yourself and set an example for others to do so.
I recently read Mark Bray’s Antifa book, and was impressed by the story of one European antifascist activist who just kept plastering over nazi graffiti for months on end until the nazi simply gave up.
Yup. We even just published a practice-informing randomized comparative phase II trial (I don’t think we knew RNCTs existed when we designed it) with n=90 patients, which is in the sample size range of all these RNCTs. There is no excuse to avoid prespecifying comparisons.