Optimal timing for rescue strategy after repeated failed treatment attempts

Hi all

In a mechanical thrombectomy, we try to reopen a clogged artery through device passes. You do one pass. If it works, the procedure was a success. If it doesn’t, you have three options: 1) stop, which is undesirable and uncommon early on; 2) try again; or 3) switch to a rescue strategy that is very effective at reopening the artery, but much more aggressive, with a higher risk of complications and a substantial long-term burden for the patient.

Our team wants to define the optimal time for using the rescue strategy - how many times should you try the conventional strategies before rescuing?

The outcome is the clinical state of the patient after three months. We have a huge observational database with granular procedural data, including the strategy and response per attempt.

I would probably approach this through a per-pass comparison (e.g., for the second attempt, compare patients who received rescue therapy versus those who did not, with appropriate covariate adjustment; a minimal code sketch appears after the questions below), but I wonder:

  1. Whether that would help me define the optimal timing (after how many attempts) for rescuing, or whether it would only show that the benefit eventually decreases over successive attempts;

  2. Whether it is problematic that patients without success in a given attempt also appear in the comparisons for later attempts;

  3. Whether, when forming the control group for the second pass (a new pass with the conventional strategy), I should ignore that some of those patients went on to receive rescue therapy after the 3rd, 4th, 5th… pass.
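To make the per-pass idea concrete, here is a minimal sketch in Python with statsmodels, assuming a hypothetical one-row-per-attempt layout; the column names (`patient_id`, `attempt_no`, `strategy`, example covariates `age` and `nihss`, and a binary 3-month outcome `mrs_90d_good`) are all made up:

```python
# Minimal sketch of the per-pass comparison; all column names are hypothetical.
# Assumes a pandas DataFrame with one row per (patient, attempt).
import pandas as pd
import statsmodels.formula.api as smf

def per_pass_comparison(df: pd.DataFrame, attempt: int):
    """Adjusted comparison of rescue vs. conventional at a given attempt.

    Rows at this attempt number belong to patients whose earlier attempts
    all failed, so they are the ones 'at risk' of receiving rescue here.
    """
    at_risk = df[df["attempt_no"] == attempt].copy()
    at_risk["rescue"] = (at_risk["strategy"] == "rescue").astype(int)
    # Covariate-adjusted logistic model for the binary 3-month outcome
    fit = smf.logit("mrs_90d_good ~ rescue + age + nihss", data=at_risk).fit(disp=0)
    return fit.params["rescue"], fit.conf_int().loc["rescue"]
```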

This per-pass approach seems sound to me, but I have a feeling that it’s not the best way to address this question. I am also thinking about estimating the outcome distributions given each moment, strategy, and a fixed set of covariate values, and comparing these estimates, but the overlap mentioned in (2) makes me question whether this would be appropriate.
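As a sketch of that second idea, the same per-attempt model can be used to compare standardized outcome estimates under each strategy (g-computation style; same hypothetical columns as above). Averaging over the observed covariate mix is shown here; plugging in a single fixed covariate row works the same way:

```python
# Sketch of comparing model-based outcome estimates under each strategy
# at a given attempt (standardization / g-computation). Hypothetical columns.
import pandas as pd
import statsmodels.formula.api as smf

def standardized_contrast(df: pd.DataFrame, attempt: int) -> float:
    """Difference in predicted P(good 3-month outcome), rescue vs. conventional."""
    at_risk = df[df["attempt_no"] == attempt].copy()
    at_risk["rescue"] = (at_risk["strategy"] == "rescue").astype(int)
    fit = smf.logit("mrs_90d_good ~ rescue + age + nihss", data=at_risk).fit(disp=0)
    # Predict for every patient under both strategies, then average
    p_rescue = fit.predict(at_risk.assign(rescue=1)).mean()
    p_conventional = fit.predict(at_risk.assign(rescue=0)).mean()
    return p_rescue - p_conventional
```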

An approach that could handle data from multiple passes at once and provide a more direct answer like “Switching after X failed attempts is the way to go”, instead of “The benefit of rescuing decreases this way over time”, would add a lot to this project. Is there any method you would recommend to address this question? I’d also appreciate references or examples of articles with similar scenarios.

Thanks in advance


I didn’t take the time to fully absorb the study setup, but I think you could model the finest-grain data with a multi-state transition model. If you have multiple ordered categories, an ordinal Markov longitudinal model can come into play, as detailed here. There will be an absorbing state (and termination of records in the tall-and-thin dataset) of “complete success”. Once you fit the longitudinal model you can use it to estimate the probability of being in a certain state over time and the expected time in any subset of states. Using the latter you can estimate the expected time in an unsuccessful state, which is virtually the same as the expected time until success.
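As a toy illustration of what the fitted model buys you, here is a numpy calculation of state-occupancy probabilities over passes and expected time in non-success states. The transition matrix below is invented; in practice each row would come from the fitted (possibly covariate-specific) transition model:

```python
# Toy state-occupancy calculation for a Markov multi-state model.
# The transition matrix is invented for illustration only.
import numpy as np

# States: 0 = unsuccessful, 1 = partial recanalization, 2 = complete success
P = np.array([
    [0.50, 0.30, 0.20],   # transitions from "unsuccessful"
    [0.10, 0.50, 0.40],   # transitions from "partial"
    [0.00, 0.00, 1.00],   # "complete success" is absorbing
])

occ = np.array([1.0, 0.0, 0.0])   # everyone starts in the unsuccessful state
expected_unsuccessful = 0.0
for t in range(1, 11):            # occupancy over the first 10 passes
    expected_unsuccessful += occ[:2].sum()  # time spent in non-success states
    occ = occ @ P
    print(f"after pass {t}: P(state) = {occ.round(3)}")

print("expected passes in a non-success state (truncated at 10):",
      round(expected_unsuccessful, 2))
```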


Thank you @f2harrell. I agree that multi-state models could help define the optimal timing when we think about procedural success. Do you have any suggestions for choosing the optimal timing when it comes to evaluating later outcomes, such as clinical status months after the procedure? Or would a per-pass analysis be the most appropriate/interpretable approach?

Looking forward to RMS next month

Glad you’re in RMS. I haven’t thought about predicting long-term outcomes from short-term results. Perhaps it’s best to start with an idealized design in which you could assess the patient’s outcome level at every time unit, with an absorbing state of complete success. Then use estimates from this short-term longitudinal analysis to predict a single long-term outcome. Sometimes it’s easiest to do this with a landmark analysis, where the short-term phase is conditioned on and becomes a series of new baseline variables for long-term prediction. That assumes no dropouts or missing data in the short term.
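A rough sketch of that landmark idea in Python, with hypothetical column names: summarize each patient’s history up to the landmark attempt into baseline covariates, then fit an ordinary model for the 3-month outcome:

```python
# Hedged sketch of a landmark analysis: the short-term phase is summarized
# into baseline covariates for a long-term outcome model. Hypothetical columns:
#   passes:   patient_id, attempt_no, strategy, attempt_success (0/1)
#   outcomes: patient_id, mrs_90d_good (binary 3-month outcome)
import pandas as pd
import statsmodels.formula.api as smf

def landmark_dataset(passes: pd.DataFrame, landmark_attempt: int) -> pd.DataFrame:
    """One row per patient, summarizing history up to the landmark attempt."""
    hist = passes[passes["attempt_no"] <= landmark_attempt]
    return (
        hist.groupby("patient_id")
        .agg(
            n_attempts=("attempt_no", "max"),
            any_rescue=("strategy", lambda s: int((s == "rescue").any())),
            success_by_landmark=("attempt_success", "max"),
        )
        .reset_index()
    )

def landmark_fit(passes: pd.DataFrame, outcomes: pd.DataFrame, landmark_attempt: int):
    lm = landmark_dataset(passes, landmark_attempt).merge(outcomes, on="patient_id")
    # Short-term history now enters as ordinary baseline covariates
    return smf.logit(
        "mrs_90d_good ~ n_attempts + any_rescue + success_by_landmark", data=lm
    ).fit(disp=0)
```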
