The main analysis from most trials is the intention-to-treat analysis. This compares outcomes between the trial arms (three, in this case), and is interpreted as the difference in outcome had everyone been randomized to arm A vs B, B vs C, or A vs C. Since the intention-to-treat effect is the effect of randomization, we don’t need to worry about changes to the intervention after baseline (changes to the outcome definition are an entirely different matter).
But usually we really want to know the effect of receiving one intervention versus another. For that question, changing the intervention partway through the trial is important to consider.
If everyone had adhered perfectly, no one had been lost to follow-up, and there had been no changes to the intervention, the intention-to-treat analysis would also be an estimate of the intervention effect. But in this trial, since there are deviations after baseline, the intention-to-treat effect is no longer a good estimate of the intervention effect.
In a two-arm trial with a placebo control, when the intention-to-treat effect is not a good estimate of the intervention effect, it is usually closer to the null. But in a three-arm trial, or a trial with a comparison group that isn’t placebo, this won’t necessarily be true. So, we know the intention-to-treat effect in this trial is a poor estimate of the effect of the intervention, but we don’t know whether it under-estimates or over-estimates that effect.
Estimating the intervention effect would require (1) a clear definition of the interventions you want to compare (e.g., should they include this deviation or not); (2) data on all predictors of the intervention received, collected at baseline and over follow-up; and (3) statistical methods that account for time-varying adherence-confounder feedback.
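One standard family of methods for point (3) is inverse probability of adherence weighting (a g-method): each person is weighted by the inverse of the probability of their observed adherence history given their covariate history, so that adherers and non-adherers become comparable. The sketch below uses entirely made-up adherence probabilities for illustration; in a real analysis these would be estimated, for example with pooled logistic regression on baseline and time-varying covariates.

```python
import numpy as np

# Hypothetical toy data: 4 participants, adherence measured at 2 visits.
# These numbers are invented purely to show the weight calculation.
adhered = np.array([[1, 1],
                    [1, 0],
                    [1, 1],
                    [0, 0]])  # observed adherence at visits 1 and 2

# P(adhere | covariate history) at each visit -- in practice, fitted values
# from a model including baseline and time-varying confounders.
p_cond = np.array([[0.8, 0.7],
                   [0.9, 0.4],
                   [0.7, 0.6],
                   [0.5, 0.3]])

# Marginal P(adhere) at each visit, used to stabilize the weights.
p_marg = np.array([0.75, 0.5])

# Probability of each person's *observed* adherence at each visit.
p_obs_cond = np.where(adhered == 1, p_cond, 1 - p_cond)
p_obs_marg = np.where(adhered == 1, p_marg, 1 - p_marg)

# Stabilized weight: product over visits of marginal / conditional probability.
weights = np.prod(p_obs_marg / p_obs_cond, axis=1)
print(weights)
```

A weighted analysis of the outcome (e.g., a weighted regression of outcome on assigned arm among adherers) then estimates the intervention effect, provided the conditions in (1) and (2) hold, since the weighting only removes confounding by variables that were actually measured.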