I was asked to design a trial with strict inclusion criteria. I am concerned that eligible patients will be uncommon and the consent refusal rate will be high, and I have no prior data from which to generate an estimate. I need an estimate of the likely recruitment rate from a feasibility study.
From a financial point of view, I believe the trial would be viable if each center running the trial could randomize 1 patient per week. Knowing how many patients each center admits per week, and considering that I can run the feasibility study in a sample of 20% of all candidate centers, for how long should I run the pilot before I have a reasonable estimate of whether the mean recruitment rate in the definitive study will be above 1 patient/week? I would consider the larger trial unfeasible if there is a 70% chance that recruitment will be lower than the selected threshold.
A Bayesian approach seems warranted, but I would appreciate any guidance here.
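One way to make the decision rule concrete is a conjugate Gamma-Poisson model for the per-centre weekly recruitment rate. The sketch below is only an illustration of that idea, not the poster's actual design: the number of pilot centres, the prior, and the assumed observed rate are all hypothetical placeholders.

```python
# Minimal sketch: Gamma-Poisson (conjugate) model for the mean
# per-centre weekly recruitment rate. All numbers are hypothetical.
import numpy as np
from scipy.stats import gamma

n_pilot_centres = 6      # e.g. 20% of 30 candidate centres (hypothetical)
threshold = 1.0          # viability target: 1 patient per centre per week
futility_prob = 0.70     # declare unfeasible if P(rate < threshold) > 0.70

# Weakly informative Gamma(a0, b0) prior on the mean weekly rate
a0, b0 = 1.0, 1.0

def posterior(total_patients, centre_weeks):
    """Conjugate update: Poisson counts with a Gamma prior on the rate."""
    return a0 + total_patients, b0 + centre_weeks

def prob_below_threshold(a, b):
    """Posterior probability that the mean recruitment rate is < threshold."""
    return gamma.cdf(threshold, a, scale=1.0 / b)

# Illustrate how the decision sharpens as the pilot runs longer, under one
# pessimistic scenario (true rate of 0.8 patients/centre/week).
rng = np.random.default_rng(0)
true_rate = 0.8
for weeks in (4, 8, 12, 16, 24):
    centre_weeks = n_pilot_centres * weeks
    total = rng.poisson(true_rate * centre_weeks)  # one simulated pilot
    a, b = posterior(total, centre_weeks)
    p_low = prob_below_threshold(a, b)
    decision = "unfeasible" if p_low > futility_prob else "continue"
    print(f"{weeks:2d} weeks: {total:3d} patients, "
          f"P(rate < {threshold:.0f}/wk) = {p_low:.2f} -> {decision}")
```

In practice you would repeat the simulation many times for each candidate pilot duration, and over a range of plausible true rates (or the prior predictive), to see how often the rule reaches a clear decision; that operating-characteristic check is what actually tells you how long the pilot needs to run.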
This is a great question. I don’t have the relevant experience to comment on precisely what is done, but I am extremely here for the answers, as this may prove useful in my future work as well.
Really not an exact science. Consistently, the best predictor of recruitment at a centre is that centre's recruitment rate in previous studies. Buy-in from local investigators is essential; for this reason, the number of potentially eligible patients at a centre is not closely related to its rate of recruitment. In addition, the rate of recruitment speeds up over time as procedures at the centre get ironed out. The pilot is also a good opportunity to fine-tune how the trial is presented to potential participants and by whom, as these factors are also influential. Inevitably, issues will arise as the pilot runs that should prompt protocol changes. You could treat this as a saturation exercise: run the pilot until no new major issues have been identified for a few weeks, at which point you will have a reasonable sense of how things will run.
I agree with Pavel_Roshanov on site characteristics as a predictor of recruitment outcomes. I would also suggest running a focus group with some potential investigators to assess whether the design is feasible for recruitment.
Pharmaceutical companies generally conduct workshops with a sample of target investigators so they can design the study around realistic accrual expectations, but I don't think this is common practice in studies designed by academic centers.
I have extensive experience running both NIH- and pharmaceutical-company-sponsored RCTs in a community oncology setting. Some studies end up accruing really poorly because the inclusion criteria were too restrictive or impractical. If a requirement is not aligned with the site's current practice, the site may not enroll even one patient. Unrealistic follow-up requirements can also discourage enrollment.
Sometimes even a pilot may not be informative about the success of recruitment, if the pilot sites are not representative of the rest of the sites, but it will still have the benefits that Pavel_Roshanov mentioned.
Good luck.
Excellent comments, @aztezcan. Glad to see you contributing here. Those are very useful for anyone planning a study who may not be familiar with the challenges of recruitment and accrual.
One interesting problem is heterogeneity in practice patterns within sites. One of our largest clinical services was an enrolling site in a major cardiovascular outcomes trial that is still ongoing (recruitment closed, follow-up still going). We enrolled only 1 or 2 patients because the physicians within that area had differing views on the trial and I think most really didn’t care to enroll patients, so eventually our site PI pulled the plug and admitted that our recruitment was likely a total bust.
Also, knowing which studies compete for the same patient population as yours will help during the design phase. Sites will have to make choices; hence, monetary incentives and/or patient focus might be the deciding factors.
Yes! When I work with investigators on study planning and sample size discussions, I am surprised by how many seem to neglect this factor. They'll say "We put in XXX devices per year" without considering whether all of those patients will be eligible for their study after filtering out those who may be enrolled in competing trials.