Say I’m planning a randomized trial (time-to-event outcome) with 4 sites and want to balance randomization for each site. Am I then committed to stratifying by site in analysis? Or including site as a fixed effect in regression models?
My understanding is:
- stratified randomization -> stratified analysis
- block randomization for balance -> consider the balancing factors as fixed effects
Balance is obviously good, and stratification may be necessary if the baseline hazards have very different shapes across strata. But as you stratify on more and more factors, moving toward matched-pairs randomization, the large number of small strata seems to erode study power.
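For concreteness, this is roughly the kind of site-balanced scheme I have in mind (a sketch of permuted-block randomization within each site; the block size of 4 and the site labels are placeholders, not part of the actual design):

```python
import random

def site_stratified_blocks(sites, n_blocks_per_site, block_size=4, seed=2024):
    """Generate a permuted-block randomization list separately for each site.

    Within every block exactly half the slots are A and half are B, so
    allocation stays (nearly) balanced within each site as enrollment proceeds.
    """
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    lists = {}
    for site in sites:
        sequence = []
        for _ in range(n_blocks_per_site):
            block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
            rng.shuffle(block)
            sequence.extend(block)
        lists[site] = sequence
    return lists

# Example: 4 sites, 5 blocks of 4 = assignments for 20 patients per site
for site, seq in site_stratified_blocks(["S1", "S2", "S3", "S4"], 5).items():
    print(site, "".join(seq))
```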
I’m familiar with the literature pointing out drawbacks of matched case-control studies (Pepe et al., Clin Chemistry 2012). Are there any manuscripts or texts that lay out these issues for trial design and analysis?
Yes, according to ICH E9 Section 5.7 [ICHE9].
Stephen Senn has written about randomisation and balance, e.g., when he was responding to that paper in Social Science & Medicine: "Perfect balance is not, contrary to what is often claimed, a necessary requirement for causal inference, nor is it something that randomisation attempts to provide." [blog post]
To add further to what Paul nicely stated, Senn has stressed that the analysis model should dictate the execution of the randomization and not vice versa. And I think that blocked randomization is overused. The only real reasons I can think of for blocking within center are:
- it’s hard to maintain blinding if you don’t
- you don’t want to induce any kind of calendar time effect on outcomes if randomizations at one center end up being AAABBB
I’d love to hear more thoughts about that.
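To make the second reason concrete, here is a quick simulation sketch (a 12-patient center and block size of 4, both arbitrary) comparing simple randomization with permuted blocks on the longest same-arm run and the final within-center imbalance:

```python
import random

def longest_run(seq):
    """Length of the longest run of identical consecutive assignments."""
    best = cur = 1
    for prev, nxt in zip(seq, seq[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

def simulate(n_patients=12, block_size=4, n_sims=10_000, seed=1):
    rng = random.Random(seed)
    half = block_size // 2
    for label in ("simple", "blocked"):
        runs, imbalance = [], []
        for _ in range(n_sims):
            if label == "simple":
                # coin-flip randomization: runs like AAABBB can easily occur
                seq = [rng.choice("AB") for _ in range(n_patients)]
            else:
                # permuted blocks: balance restored at the end of every block
                seq = []
                for _ in range(n_patients // block_size):
                    block = ["A"] * half + ["B"] * half
                    rng.shuffle(block)
                    seq.extend(block)
            runs.append(longest_run(seq))
            imbalance.append(abs(seq.count("A") - seq.count("B")))
        print(f"{label:8s} mean longest run = {sum(runs) / n_sims:.2f}, "
              f"mean |#A - #B| at the end = {sum(imbalance) / n_sims:.2f}")

simulate()
```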
@Stephen's 2004 paper with the following quote is one of the most underappreciated papers when trialists are constructing their designs:
The decision to fit prognostic factors has a far more dramatic effect on the precision of our inferences than the choice of an allocation based on covariates or randomization approach and one of my chief objections to the allocation based on covariates approach is that trialists have tended to use the fact that they have balanced as an excuse for not fitting. This is a grave mistake.
My view . . . was that the form of analysis envisaged (that is to say, which factors and covariates should be fitted) justified the allocation and not vice versa.
When a new trial is planned, investigators seem to automatically assume that blocking (e.g., every 6 patients enrolled at a site must have 3 assigned to treatment A and 3 to treatment B) and stratification (e.g., by a crude, arbitrary dichotomization of baseline disease severity) are required. They later make the big mistake of not adjusting for continuous disease severity in their final model.
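As a toy illustration of the quote (a sketch only: a normal outcome rather than time-to-event, arbitrary effect sizes, simple unstratified randomization), adjusting for the continuous prognostic factor shrinks the spread of the treatment-effect estimate far more than any balancing of the design could:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_trial(n=200, beta_severity=2.0, treatment_effect=0.5, sigma=1.0):
    """One two-arm trial with a strongly prognostic continuous covariate."""
    severity = rng.normal(size=n)
    arm = rng.integers(0, 2, size=n)                  # simple 1:1 randomization, no blocking
    y = treatment_effect * arm + beta_severity * severity + rng.normal(scale=sigma, size=n)
    unadjusted = y[arm == 1].mean() - y[arm == 0].mean()
    X = np.column_stack([np.ones(n), arm, severity])  # intercept, treatment, severity
    adjusted = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return unadjusted, adjusted

estimates = np.array([one_trial() for _ in range(2000)])
print("empirical SD of estimate, unadjusted:", estimates[:, 0].std().round(3))
print("empirical SD of estimate, adjusted:  ", estimates[:, 1].std().round(3))
```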
What is the best paper that explains why blocking and stratification are not so important?
I’ve thought that quotas are more logical than stratification. For example, if you want to make sure that a socio-economic status (SES) group is not underrepresented in a randomized trial, you can either quit enrolling those in the other groups once you have enough, or enroll patients using up-front probability sampling. That doesn’t mean that SES needs to be stratified on, nor that it necessarily needs to be included in the final covariate-adjusted model (it may be logical to include SES as a covariate in many cases, but SES should then be represented by a continuous measurement where possible).
Some replies on Twitter:
If you have a factor that you will fit in the model, then blocking that factor will make a contribution to efficiency. The lower bound for the effect variance is 2(sigma^2)/n. This is only reached when everything is orthogonal. In practice randomisation gets close to this.
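For readers less familiar with the bound quoted here: assuming n patients per arm (the reply does not say whether n is per arm or in total) and a common outcome variance σ², the variance of the unadjusted difference in means is

$$
\operatorname{Var}(\hat{\Delta}) \;=\; \operatorname{Var}(\bar{Y}_A - \bar{Y}_B)
\;=\; \frac{\sigma^2}{n} + \frac{\sigma^2}{n} \;=\; \frac{2\sigma^2}{n},
\qquad
\operatorname{Var}(\hat{\Delta}_{\text{adjusted}}) \;=\; \frac{2\sigma^2}{n}\cdot\frac{1}{1 - R^2},
$$

where, in the adjusted expression, σ² is the residual variance after fitting the covariates and R² is the squared multiple correlation between the treatment indicator and those covariates. Orthogonality (R² = 0) attains the bound, and randomisation keeps R² close to zero on average.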
Quotas are generally a bad idea for reasons that are given in chapters 9 and 25 of Statistical Issues in Drug Development.
The argument against quotas is that recruiting is a continuous process. You condemn the trial to proceed at the rate of the slowest-recruiting group, and to maintain the quotas you have to stop recruitment from the faster ones to let the slower catch up. The net result is that the trial becomes inefficient as regards the primary purpose, which is nearly always hard to achieve anyway.
Sorry that I missed this at the time, @Brenda_Kurland.
Brennan Kahan, Tim Morris, and Michael Harhay (and others…) have done some work related to this. A few references you may find useful:
I think there are more similar papers but this may be a useful start for more discussion on this topic…
Building from this: the temporal trend is one issue I hear expressed (e.g., worry about bias in the treatment-effect estimate if outcomes for the disease in general improve over time or vary seasonally and the randomization sequence at a center happens to look like the one you showed). Another concern about not blocking within each center is that a particularly well- or poorly-performing center could exert undue influence over the results if most of its patients end up assigned to the same treatment, so many people are more comfortable with a technique that yields (roughly) equal allocation within each center.
I think you can argue that this is related to our field’s tendency to put a lot of attention on the point estimate and less on the uncertainty.