Resources for handling blocked randomization in the analysis of RCTs

Hi, all,

I’m looking for some resources (textbooks, online resources, papers) or good examples on how to handle blocked randomization in RCTs. I.e., where we have random block sizes of, say, 2 or 4, and participants are randomized within each block (2: AB or BA; 4: AABB, ABAB, ABBA, BAAB, BABA, or BBAA). I’m aware of the criticisms of blocked randomization!
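For concreteness, here’s a minimal sketch of how I picture such a sequence being generated (R; all names and numbers are illustrative, not from any real trial):

```r
## Minimal sketch: one permuted-block allocation sequence with block
## sizes drawn at random from {2, 4} (all names here are illustrative)
set.seed(42)
n_blocks <- 10
sizes    <- sample(c(2, 4), n_blocks, replace = TRUE)
alloc    <- unlist(lapply(sizes, function(k) sample(rep(c("A", "B"), k / 2))))
block_id <- rep(seq_len(n_blocks), times = sizes)
table(alloc)  # balanced overall, and within every completed block
```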

My intuition is that something like a mixed-effects model with block treated as a random effect would be appropriate. However, with such small block sizes, can we actually estimate tau in a random-intercept model, post ~ pre + treatment + (1 | block), let alone the full random-effects covariance matrix if we also include a random treatment effect, post ~ pre + treatment + (treatment | block)?
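In lme4 syntax, what I have in mind looks something like this (a sketch, assuming a data frame dat with columns post, pre, treatment, and block; whether these fits are actually estimable with such small blocks is exactly my question):

```r
library(lme4)

## Random intercept per randomization block
m1 <- lmer(post ~ pre + treatment + (1 | block), data = dat)

## Random intercept plus random treatment effect per block
m2 <- lmer(post ~ pre + treatment + (treatment | block), data = dat)

## With blocks of size 2-4, I'd expect boundary (singular) fits to be common
isSingular(m1)
isSingular(m2)
```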

Any resources or information would be much appreciated. If you suggest a book, please include the chapter, section, and/or page!

2 Likes

Check out: CONSORT

I’m not sure that block needs to be treated as a random effect or adjusted for in any way; I have not encountered an example yet. The point of blocking is to ensure that whenever randomization stops during the study, the number of subjects randomized to each treatment group is roughly the same. For example, if the block size is always 4 and there are no stratification factors, then the largest possible difference in the number of subjects randomized between groups is 2. You can use this knowledge to cross-check for potential errors in the randomization scheme.
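As a quick illustrative check (a toy sequence, not from any real trial), the running imbalance under a fixed block size of 4 never exceeds 2:

```r
## Running imbalance |n_A - n_B| under a fixed block size of 4
set.seed(1)
alloc     <- unlist(replicate(25, sample(rep(c("A", "B"), 2)), simplify = FALSE))
imbalance <- cumsum(alloc == "A") - cumsum(alloc == "B")
max(abs(imbalance))  # bounded by 2, no matter when randomization stops
```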

Permuted blocks with random block sizes can be used to increase the unpredictability of the allocation sequence and thereby improve concealment*. It’s important, for concealment purposes, that only the unblinded statistician performing the randomization knows the block size(s).

*Note that concealment is not the same concept as blinding. Blinding may not be possible for all trials (e.g., surgical procedures). Allocation concealment, by contrast, is possible in all types of trials, including unblinded trials, e.g., by using a centralized randomization service, which cannot be subverted by investigators and provides independent verification that it was not possible for them to know the allocation sequence in advance.
Concealment means the person randomizing the patient does not know what the next treatment allocation will be and cannot predict who is going to receive a certain treatment.

1 Like

Many thanks. Concretely, my question is strictly about how to handle the analysis of a block randomized RCT. I’m pretty familiar with its conceptual nature (along with other randomization schemes) and its role in trial design, blinding, etc.

It’s my understanding that blocks need to be considered in the analysis since that’s the level at which randomization was performed; i.e., people are exchangeable within but not across blocks. This becomes especially important if there’s some general time effect across the duration of the experiment, in which case accounting for the block structure can improve precision. Consider the following from Fundamentals of Clinical Trials by Friedman et al. (2015):

A disadvantage of blocked randomization is that, from a strictly theoretical point of view, analysis of the data is more complicated than if simple randomization were used. Unless the data analysis performed at the end of the study reflects the randomization process actually performed [26, 28–30] it may be incorrect since standard analytical methods assume a simple randomization. In their analysis of the data most investigators ignore the fact that the randomization was blocked. Matts and Lachin [26] studied this problem and concluded that the measurement of variability used in the statistical analysis is not exactly correct if the blocking is ignored. Usually the analysis ignoring blocks is conservative, though it can be anticonservative especially when the blocks are small (e.g. a block size of two). That is, the analysis ignoring blocks will have probably slightly less power than the correct analysis, and understate the “true” significance level. Since blocking guarantees balance between the two groups and, therefore, increases the power of a study, blocked randomization with the appropriate analysis is more powerful than not blocking at all or blocking and then ignoring it in the analysis [26]. Also, the correct treatment of blocking would be difficult to extend to more complex analyses. Being able to use a single, straightforward analytic approach that handles covariates, subgroups, and other secondary analyses simplifies interpretation of the trial as a whole. Performing the most correct analysis is even more problematic for adaptive designs, as discussed in the next section.

Given this, it’s not totally clear to me how to “properly” handle blocking within the model. With random effects or cluster-corrected SEs, aren’t there issues with estimating the variances with such small blocks? I’d really like to read more about this.
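To make the worry concrete, here’s a rough simulation sketch (toy data with no true block effect, fit with lme4); I would expect the random-intercept variance to land on the boundary, tau-hat = 0, in a large share of replicates:

```r
library(lme4)
set.seed(7)

## Simulate blocked trials with NO true block effect and count how often
## the random-intercept fit is singular (tau-hat estimated as exactly 0)
singular <- replicate(200, {
  sizes     <- sample(c(2, 4), 30, replace = TRUE)
  block     <- factor(rep(seq_along(sizes), times = sizes))
  treatment <- unlist(lapply(sizes, function(k) sample(rep(0:1, k / 2))))
  n    <- length(block)
  pre  <- rnorm(n)
  post <- 0.5 * pre + 0.3 * treatment + rnorm(n)
  fit  <- suppressMessages(lmer(post ~ pre + treatment + (1 | block)))
  isSingular(fit)
})
mean(singular)  # proportion of boundary (tau-hat = 0) fits
```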

Blocking is often done for convenience, using a process in which the blocks are crude approximations of the variables you really care about. So if, for example, you are blocking on age intervals to balance crudely on age, it is not appropriate to put block in the model; rather, include the underlying continuous age variable as a covariate. Covariate adjustment should use a superset of the blocking variables.

Thanks, Frank. I’m not referring to stratification based on some covariate (e.g., age)—just blocking the randomization without any consideration of the subjects’ attributes.

Sorry, I confused two things. So you are referring simply to keeping the allocation ratio balanced in successive calendar-time chunks. I’ve never seen anyone have to consider that when modeling the resulting RCT data. Clustering (random effects) is done only for units such as geographical areas or clinical sites.

2 Likes

Hi,

I was going to offer some additional thoughts, but I think that the prior replies already cover those for the most part.

The only point that I would add at present is that, in ~40 years of being engaged in clinical research, either as the study statistician or sitting on DSMBs, I have never seen anyone use, or propose to use, a mixed effects model in the setting described above, where the randomization blocks are included as a random effect.

I have certainly seen other clustering effects, such as site in a multi-site study, be evaluated, at least as a sensitivity analysis, and that is recommended, for example, in ICH E9.

2 Likes

This happens to me too. What is the difference between “stratifying” and “blocking” on specific covariates in an RCT? As far as I can tell, people use the two interchangeably, but “stratification” comes from sampling theory whereas “blocking” comes from experimental design.

Thus, “stratification” implies a procedure that almost never happens in RCTs, namely that patients were sampled from a population based on a stratification factor such as age. Instead, enrolled patients (typically a convenience sample) are randomized within blocks based on specific blocking factors such as age. Better still is to covariate-adjust for the blocking factor.

Related discussion with @ChristopherTong and @ESMD on the differences between random sampling and random allocation.

2 Likes

Thanks. Yes, exactly. I’ve definitely seen clustering for geographical area and/or site, and I haven’t seen anything for blocking either. I’m quite surprised by this…

Have you seen any explicit modeling of the blocked randomization structure? If so, what have people done?

I see there being a pretty clear distinction between stratified sampling and stratified randomization. I like the way Friedman’s text distinguishes between blocking (general) and stratified (conditional on covariates) randomization. I agree, though, that all of these terms are confusing, as they’re used differently from how they’re defined in their original literatures. It also makes searching extremely difficult!

2 Likes

Presuming that you are referring to an analysis of study endpoints that includes some kind of independent variable for the randomization blocks themselves, even if not in a mixed effects model setting: no. I honestly don’t even recall any discussion of the topic.

BTW, I should note that I have the 1998 third edition of the book in question, and they do reference the same, at least theoretical, concept of considering the randomization blocks in the analysis, with some references as well. The wording from that paragraph appears to have been somewhat updated in the 2015 fifth edition that you quote above. In the third edition, that content is on page 66.

That the topic goes back at least 25 years in this book, and that there does not seem to be any indication of broad use of the suggested approach in practice, might suggest that outside of internal sensitivity analyses that do not get published, the approach is not prevalent.

So it may be one of those situations where, theoretically, it might be interesting, but in practice the impact is not material, so it is not generally done.

3 Likes

Yup. Friedman’s textbook acknowledges the confusion and even comments that other texts use the term “blocking” in the same way that the Friedman textbook uses the term “stratified randomization”. See here for a fantastic recent article that associates “stratification” only with sampling and reserves the term “blocking” for experiments such as RCTs.

I am far from a terminology absolutist. People can use whichever terms they prefer. But I did want to make sure that there was not some method/procedure I was missing. It appears that is not the case.

Pragmatically, I have seen far too many clinicians, and at least some statisticians, make grave inferential and design errors by confusing random sampling with random allocation. Hence my preference to keep the two as separate as possible by distinguishing “samples” from “cohorts”, “representative sampling” from “representative causal mechanisms”, “generalizability” from “transportability”, and “stratification” from “blocking”.

2 Likes