Hello,
There’s this clinical trial where the study drug was tested with the following scheme: one substudy where the study drug was compared to a placebo comparator (both flat-dosed), and another substudy within the same trial where the study drug was also compared to a placebo comparator, but with a regimen where both arms were dose-escalated after 3 months, up to a given level.
As this scheme is not that common to me, my question is the following: how should the type I error be handled in this case? You’ll probably tell me it depends on the question(s) being asked, but I just wanted to ask around, because it’s an interesting case.
Personally, I think the family-wise error rate (FWER) has to be managed, since the same question is asked in both substudies (does the study drug differ from placebo?), so I’d split the alpha, but I am not sure.
Here is the link to the trial record in question:
https://www.clinicaltrials.gov/ct2/show/NCT01433497
I know this multiple testing can be handled in ways other than alpha splitting, so feel free to generalize my take on splitting the alpha to any other method for handling such a case.
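To make “splitting the alpha” concrete, here is a minimal sketch of the two most common splits across the two substudies (Bonferroni and Šidák). This is just a generic illustration of the idea, not anything taken from the trial’s SAP:

```python
alpha = 0.05  # overall family-wise error rate (FWER) to protect
k = 2         # two substudies, each testing study drug vs placebo

# Bonferroni: simply divide the alpha between the two tests
alpha_bonferroni = alpha / k                # 0.025 per substudy

# Sidak: exact under independence of the two test statistics
alpha_sidak = 1 - (1 - alpha) ** (1 / k)    # ~0.0253 per substudy

# Check the resulting FWER assuming independent tests:
# FWER = 1 - (1 - per-test alpha)^k
print(f"Bonferroni: {alpha_bonferroni:.4f} per test,"
      f" FWER = {1 - (1 - alpha_bonferroni) ** k:.4f}")
print(f"Sidak:      {alpha_sidak:.4f} per test,"
      f" FWER = {1 - (1 - alpha_sidak) ** k:.4f}")
```

Bonferroni is slightly conservative (FWER ≈ 0.0494), while the Šidák split hits 0.05 exactly when the two tests are independent.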
But there are also several sub-questions:
- Is this a real case of multiple testing?
- The company decided on this two-parallel-group design, with each substudy having its own comparator arm (to preserve double-blinding with respect to the number of tablets administered, I presume), but was this mandatory? If not, what would the implications have been for the statistical comparison? Just a bias from the fact that one arm would not have been blinded?
Perhaps the answer will be obvious to some, but I am not sure how to view things.
I tried to have a look at related topics on the forum but saw nothing close; sorry if I missed something.
I ended up getting more insight into the design. The second substudy (titrated intervention vs titrated placebo) was added to the protocol through an amendment. Originally, the design of the study was two different flat doses of the intervention vs placebo, but the arm for the highest flat dose was discontinued early in the trial, following a sponsor decision. The second substudy was then added to replace the highest flat dose.

Because it was added ‘on the fly’, and with a titrated scheme, the SAP of the main study was modified to allocate an alpha of 0.05 to the main study (flat-dosed) and 0.05 to the substudy (titrated). As there was an interim analysis, alpha spending lowered the significance level to 0.0296 for the main study. The second substudy was performed with a delay relative to the main study, and used different centers. Not a simple story!
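For anyone curious how a number like 0.0296 arises mechanically: alpha spending reserves part of the 0.05 for the interim look, and the final boundary is then solved from the joint distribution of the sequential Z-statistics. Below is a minimal sketch assuming, purely for illustration, one interim at 50% information and an O’Brien-Fleming-type spending function; the actual SAP details aren’t given here, so these numbers won’t reproduce the reported 0.0296:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm, multivariate_normal

alpha = 0.05       # overall two-sided type I error for the main study
t1 = 0.5           # information fraction at the interim look (assumed)
rho = np.sqrt(t1)  # corr(Z1, Z2) for group-sequential Z-statistics

# O'Brien-Fleming-type alpha spending function (Lan & DeMets, 1983)
def spend_obf(t):
    return 2 * (1 - norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t)))

a1 = spend_obf(t1)         # alpha spent at the interim look
c1 = norm.ppf(1 - a1 / 2)  # two-sided interim boundary on |Z1|

# Under H0, (Z1, Z2) is bivariate normal with correlation rho.
mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def rect_prob(b1, b2):
    """P(|Z1| < b1 and |Z2| < b2) via inclusion-exclusion on the joint CDF."""
    F = lambda x, y: mvn.cdf([x, y])
    return F(b1, b2) - F(-b1, b2) - F(b1, -b2) + F(-b1, -b2)

# Total type I error = 1 - P(no rejection at either look);
# solve for the final boundary c2 such that it equals alpha.
c2 = brentq(lambda c: rect_prob(c1, c) - (1 - alpha), 1.0, 4.0)

print(f"interim: nominal alpha {a1:.4f} (reject if |Z| >= {c1:.3f})")
print(f"final:   nominal alpha {2 * (1 - norm.cdf(c2)):.4f} (reject if |Z| >= {c2:.3f})")
```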
I always have trouble wrapping my head around dose titration studies and studies where treatment arms are dropped or added. In the frequentist world there are real type I assertion probability issues, as you have surmised. For future studies a Bayesian design has many advantages. Chief among these is that the assertions Bayes computes probabilities for are assertions about unknown parameters such as treatment efficacy. Such probabilities come only from the prior distribution and the data. On the other hand, p-values are computed by fixing the unknown treatment effect at a single value (zero) and computing probabilities of observing data. Design staging influences the latter in complex ways, because design changes give you different probabilities of observing data. The parameter space is your friend. The data space is a hassle.
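To illustrate Frank’s point with a toy example (my own sketch, with made-up numbers): in a conjugate normal-normal model, the posterior probability that the treatment effect is positive is a function of the prior and the observed data alone, with no adjustment for how the design was staged:

```python
import numpy as np
from scipy.stats import norm

# Skeptical prior on the treatment effect delta (assumed for illustration)
prior_mean, prior_sd = 0.0, 1.0

# Observed data: effect estimate and its standard error (made-up numbers)
effect_hat, se = 0.4, 0.25

# Conjugate normal-normal update: precision-weighted average
post_prec = 1 / prior_sd**2 + 1 / se**2
post_mean = (prior_mean / prior_sd**2 + effect_hat / se**2) / post_prec
post_sd = np.sqrt(1 / post_prec)

# Bayesian assertion probability: P(delta > 0 | data)
p_benefit = 1 - norm.cdf(0.0, loc=post_mean, scale=post_sd)
print(f"P(delta > 0 | data) = {p_benefit:.3f}")

# Dropping or adding arms changes which data you end up collecting, but
# not this calculation: the same prior and the same observed data always
# yield the same posterior, with no multiplicity adjustment for staging.
```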
I have never seen an application where frequentist inference implodes more spectacularly than in titration studies. Titration by its nature acknowledges a learn-as-you-go principle that is utterly Bayesian to its core. I expand on this idea at length (and in some philosophical depth) in my DTAT paper [1]. Interestingly, that paper even connects with Frank’s comment that “the parameter space is your friend”: the parameters of special interest there are those governing the dose titration algorithm (DTA).
[1] Norris DC. Dose Titration Algorithm Tuning (DTAT) should supersede ‘the’ Maximum Tolerated Dose (MTD) in oncology dose-finding trials. F1000Research. 2017;6:112. doi:10.12688/f1000research.10624.3