OK, so I have now read all about the evils of period-specific hazard ratios: they essentially result from selection bias, because the “susceptibles” have already dropped out of the risk set in the group experiencing the event sooner, which makes that group look relatively better in the later time periods, and you see the HR start to reverse course.
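Just to write the mechanism down explicitly (this is the standard shared gamma-frailty algebra, not anything specific to my data; θ is simply an assumed frailty variance): if each individual’s hazard is $z\,h_0(t)\,e^{\beta X}$ with frailty $z \sim \mathrm{Gamma}(\text{mean }1,\ \text{variance }\theta)$, then the hazard observable in the surviving population is

$$
h_{\mathrm{obs}}(t \mid X) \;=\; \frac{e^{\beta X}\, h_0(t)}{1 + \theta\, e^{\beta X} H_0(t)},
\qquad
\mathrm{HR}_{\mathrm{obs}}(t) \;=\; e^{\beta}\,\frac{1 + \theta H_0(t)}{1 + \theta\, e^{\beta} H_0(t)},
$$

which equals $e^{\beta}$ at $t = 0$ and drifts monotonically toward 1 as the cumulative baseline hazard $H_0(t)$ grows, even though every individual’s own hazard ratio stays constant at $e^{\beta}$.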
Now imagine a bad event, a heart attack, and we follow people forward from that heart attack and compare them to other people who were in hospital at the same time but did not have a heart attack. Initially the folks who had the heart attack die at a much faster rate than those who did not. The aHR associating heart attack with death is 4 in the first 30 days. Thereafter the survival curves clearly start to become more parallel and, as you would expect, the period-specific aHR decreases to 3, then to 2, then to 1.5…
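To see how much of that pattern depletion of susceptibles alone can generate, here is a small numpy sketch (every number is made up purely for illustration: a frailty variance of 2, a baseline hazard of 0.002/day, and a within-person HR held constant at 4 for every individual at every time):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 200_000      # patients per group
theta = 2.0      # variance of the (mean-1) gamma frailty
lam0 = 0.002     # baseline hazard per day for an average-frailty comparator
hr_true = 4.0    # constant within-person hazard ratio for heart attack vs not

def simulate(hr):
    # each person gets a frailty z with mean 1 and variance theta
    z = rng.gamma(shape=1 / theta, scale=theta, size=n)
    # constant individual hazard z * lam0 * hr -> exponential survival time
    return rng.exponential(1.0 / (z * lam0 * hr))

t_mi = simulate(hr_true)   # heart-attack group
t_no = simulate(1.0)       # hospitalized comparators

def rate(times, lo, hi):
    # events per person-day contributed inside [lo, hi) by those still at risk at lo
    at_risk = times >= lo
    events = np.sum(at_risk & (times < hi))
    person_time = np.sum(np.clip(times[at_risk], lo, hi) - lo)
    return events / person_time

for lo, hi in [(0, 30), (30, 90), (90, 180), (180, 365)]:
    rr = rate(t_mi, lo, hi) / rate(t_no, lo, hi)
    print(f"days {lo:>3}-{hi:<3}: period-specific HR ≈ {rr:.2f}")
```

The printed period-specific HRs decline steadily toward 1 across the four windows even though no individual’s susceptibility ever changes, so a sequence very much like the one above can arise from depletion alone; the question is whether that is the whole story in real data.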
Is it really a biased interpretation to say that the susceptible patients who were going to die soon because of the heart attack have mostly died by some time point, and that those who survive beyond it are clearly not as susceptible?
The problem with period-specific hazard ratios is obvious when estimating treatment effects: if you give half the patients a medication that kills those susceptible to its harmful effects, who may be older and more frail, then after time t you are left comparing only healthier treated survivors against a control group whose mortality follows its natural history, so the period-specific HR moves from apparent harm to apparent benefit.
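The same kind of sketch shows that reversal directly (again, all numbers are invented for illustration: 30% of patients are “frail” with 10 times the baseline hazard, and a hypothetical drug triples the hazard only in those frail patients while doing nothing, good or bad, for anyone else):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 200_000        # patients per arm
lam0 = 0.001       # hazard per day for a "robust" patient
frail_mult = 10.0  # frail patients have 10x the hazard even without the drug
p_frail = 0.3      # 30% of each arm is frail
harm = 3.0         # the drug triples the hazard, but only in frail patients

def simulate(treated):
    frail = rng.random(n) < p_frail
    hazard = lam0 * np.where(frail, frail_mult, 1.0)
    if treated:
        hazard = hazard * np.where(frail, harm, 1.0)  # harm concentrated in the frail
    return rng.exponential(1.0 / hazard)              # exponential survival times

t_rx, t_ctl = simulate(True), simulate(False)

def rate(times, lo, hi):
    # events per person-day in [lo, hi) among those still at risk at lo
    at_risk = times >= lo
    events = np.sum(at_risk & (times < hi))
    person_time = np.sum(np.clip(times[at_risk], lo, hi) - lo)
    return events / person_time

for lo, hi in [(0, 30), (30, 90), (90, 180), (180, 365)]:
    rr = rate(t_rx, lo, hi) / rate(t_ctl, lo, hi)
    print(f"days {lo:>3}-{hi:<3}: period-specific HR ≈ {rr:.2f}")
```

The early windows show the expected harm and the later windows drop below 1, not because the drug ever helps anyone, but because the treated risk set gets stripped of its frail patients faster than the control risk set does.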
(Putting prediction issues aside and focusing just on questions of etiology: I recognize the issue of arbitrary discontinuities in time, but the reality is that hazard ratios for specific periods make things a heck of a lot more interpretable, especially when they make sense given what you see in the survival curves.)