Time to benefit (TTB) is often determined visually as the time it takes for the hazard curves or Kaplan-Meier curves to separate. But how do you determine the exact time at which two cumulative hazard curves separate? Thanks.
I don’t remember seeing a paper about this. Interesting question. I think the sample size needed to determine this will be greater than you suspect. It is very important to obtain uncertainties around such estimates. Here are two overall approaches:
- Once you fit a Bayesian survival model, take the posterior MCMC draws from that model and for each draw compute the derived quantity of interest, e.g., the lowest time such that the survival curves differ by more than 0.01. Take the 0.025 and 0.975 quantiles of these derived quantities to get an exact credible interval for the TTB.
- Use the bootstrap to get an approximate 0.95 confidence interval for TTB by taking samples with replacement from the original dataset. For each bootstrap sample, compute the derived quantity of interest, i.e., the TTB. Then use one of the many bootstrap confidence interval constructors to get a rough 0.95 confidence interval on TTB.
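The bootstrap approach can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not anyone's published method: it simulates two arms with constant (exponential) hazards, defines TTB as the lowest time at which the fitted survival curves differ by more than 0.01, and uses the simple percentile method for the interval. The hazards, sample sizes, and 0.01 threshold are all invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_arm(n, hazard, cens=10.0):
    # Exponential event times with administrative censoring at `cens` years
    t = rng.exponential(1.0 / hazard, n)
    event = t <= cens
    return np.minimum(t, cens), event

def exp_hazard_mle(time, event):
    # MLE of a constant hazard: number of events / total follow-up time
    return event.sum() / time.sum()

def ttb(time0, event0, time1, event1, delta=0.01):
    # Lowest time at which the fitted survival curves differ by more than delta
    grid = np.linspace(0.01, 10, 1000)
    h0 = exp_hazard_mle(time0, event0)
    h1 = exp_hazard_mle(time1, event1)
    diff = np.exp(-h1 * grid) - np.exp(-h0 * grid)  # S_treated - S_control
    hit = np.nonzero(diff > delta)[0]
    return grid[hit[0]] if hit.size else np.inf

# Hypothetical data: control hazard 0.10/yr, treated 0.07/yr
t0, e0 = simulate_arm(2000, 0.10)
t1, e1 = simulate_arm(2000, 0.07)

# Bootstrap: resample each arm with replacement, recompute the derived quantity
boot = []
for _ in range(500):
    i0 = rng.integers(0, len(t0), len(t0))
    i1 = rng.integers(0, len(t1), len(t1))
    boot.append(ttb(t0[i0], e0[i0], t1[i1], e1[i1]))
lo, hi = np.quantile(boot, [0.025, 0.975])
print(f"TTB estimate: {ttb(t0, e0, t1, e1):.2f} yr; "
      f"percentile 0.95 CI [{lo:.2f}, {hi:.2f}] yr")
```

The same skeleton works for the posterior-draw approach: replace the bootstrap resampling loop with a loop over MCMC draws of the model parameters and take the same 0.025/0.975 quantiles of the derived TTB values.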
Thank you! After I posted this, I spoke to the senior author of a paper:
BMJ 2015;350:h1662; http://dx.doi.org/10.1136/BMJ.h1662. “Time to benefit for colorectal cancer screening: survival meta-analysis of flexible sigmoidoscopy trials.” These were their published methods:
“To estimate a pooled time to benefit, we combined survival data from all four studies to obtain pooled annual risk reduction estimates, allowing the time to specific absolute risk reduction thresholds to be determined. Unlike most meta-analyses where the main statistic of interest (for example, hazard ratio with confidence intervals) is reported in individual studies, our main statistic of interest “lag time to benefit” (that is, the number of years until the absolute risk reduction crossed a certain threshold) was not reported by individual studies.

To obtain the lag time to benefit for each study, we fit Weibull survival curves using the annual mortality data for the control and intervention groups, and we used the study specific curves to estimate annual absolute risk reductions and to determine when specific absolute risk reduction thresholds (1:5000, 1:2000, and 1:1000) were crossed. Then, with the simulated parameter values we used Markov chain Monte Carlo methods to obtain lag times and 95% intervals for individual studies.

To pool lag times to benefit from individual studies, we fit a random effects Weibull model using Markov chain Monte Carlo methods, allowing both the scale and the shape parameters to vary for each arm of each study. Using 100 000 Markov chain Monte Carlo simulations, we obtained point estimates, standard errors, and confidence intervals for annual mortality rates in control and intervention patients for each individual study and for the random effects meta-analysis model. From this model we obtained pooled estimates of annual absolute risk reduction as well as pooled estimates of time until specific absolute risk reduction thresholds (1:5000, 1:2000, and 1:1000) were crossed.

We performed Markov chain Monte Carlo computations using the Markov chain Monte Carlo procedure in SAS for the individual Weibull curves and OpenBUGS/BRugs for the random effects Weibull model (see appendices 6 and 7 on bmj.com). We utilized similar methods to determine the time to benefit for screening fecal occult blood testing and screening mammography in a previously published study.”
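The deterministic core of that approach (a Weibull curve in each arm, an absolute risk reduction curve, and threshold crossings) can be illustrated with a short Python sketch. The Weibull shape and scale values below are invented for illustration, and the MCMC and random-effects machinery the authors used to propagate uncertainty is deliberately omitted:

```python
import numpy as np

# Hypothetical Weibull parameters per arm (shape k, scale lam, time in years);
# in the paper these would be estimated from the annual mortality data.
k_ctl, lam_ctl = 1.4, 60.0
k_trt, lam_trt = 1.4, 66.0

def weibull_mortality(t, k, lam):
    # Cumulative incidence of death: F(t) = 1 - exp(-(t/lam)^k)
    return 1.0 - np.exp(-((t / lam) ** k))

grid = np.linspace(0.01, 15, 3000)
# Absolute risk reduction: control mortality minus intervention mortality
arr = weibull_mortality(grid, k_ctl, lam_ctl) - weibull_mortality(grid, k_trt, lam_trt)

for thresh in (1 / 5000, 1 / 2000, 1 / 1000):
    idx = np.nonzero(arr > thresh)[0]
    when = f"{grid[idx[0]]:.2f} yr" if idx.size else "never on this grid"
    print(f"ARR crosses 1:{round(1 / thresh)} at {when}")
```

With uncertainty added (posterior draws of the four Weibull parameters), you would recompute the crossing time for each draw and summarize, exactly as in the two approaches listed earlier in the thread.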
Since I work in Stata and am not a statistician, I will have to recode this based on their appendix. I can post it if I figure it out later.
The concept of time to benefit and time to harm is particularly useful for older adults with multimorbidity, where shared decision-making prior to an intervention should take this into consideration. I am not sure how using a single trial versus multiple trials affects this.
I have one worry about this line of pursuit. Absolute risk differences expand with baseline risk, so a sicker patient should have a shorter time to benefit than a less sick patient. This is a property of the patient, not a property of the treatment. And at the other end of the time scale, everyone eventually dies, so the treatment effect has to wear off. These two concerns make me think that the relative hazard scale should be revisited.
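The dependence on baseline risk is easy to demonstrate numerically. In this toy sketch (constant hazards, an invented 0.01 ARR threshold, all values hypothetical), the same hazard ratio of 0.7 produces a much shorter time to benefit for a sicker patient than for a healthier one:

```python
import numpy as np

hr = 0.7  # same relative effect for both patients
grid = np.linspace(0.01, 20, 4000)

def ttb_arr(baseline_hazard, thresh=0.01):
    # First time the absolute risk reduction exceeds `thresh`,
    # assuming constant hazards in both arms
    arr = np.exp(-hr * baseline_hazard * grid) - np.exp(-baseline_hazard * grid)
    idx = np.nonzero(arr > thresh)[0]
    return grid[idx[0]] if idx.size else np.inf

print(f"sicker patient    (h=0.20/yr): TTB ~ {ttb_arr(0.20):.2f} yr")
print(f"healthier patient (h=0.05/yr): TTB ~ {ttb_arr(0.05):.2f} yr")
```

The same sketch also shows the second point: with constant hazards the ARR curve rises, peaks, and eventually returns to zero as both survival curves approach zero, so a threshold-crossing summary is inherently tied to the time horizon.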
I think the Royston-Parmar spline model, which is implemented in both Stata and R, allows you to do exactly that in a nice way.
Could you please point me to that paper? Thank you.
With the rstpm2 package in R you can plot time-varying differences in survival and hazard functions.