Suppose a treatment has been studied in large trials and recently approved by the FDA. The trials were analyzed under a frequentist framework. They were not powered (individually or cumulatively) to detect some adverse effects we suspect might emerge once the treatment enters general use, and they did not show an increased risk of the adverse event.
Now suppose we want to conduct a retrospective study using routine clinical and administrative databases to look at those adverse effects, comparing outcomes on the new drug to outcomes without it. The drug is new enough that many eligible patients haven't yet been prescribed it because doctors aren't yet familiar with it.
The limitations of this sort of study are well appreciated. When the sample size is large enough, even small amounts of confounding common in such studies will produce “statistically significant” results in the frequentist framework. We all view these small effects with skepticism.
What if we were to analyze this study using a weak skeptical prior – skeptical against a higher risk of the adverse effect in question? I haven't seen this done before but am considering it.
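To make the idea concrete, here is a minimal sketch of what such an analysis might look like under a normal–normal conjugate approximation on the log odds ratio. All the numbers (the prior scale, the hypothetical observational estimate and its CI) are illustrative assumptions, not values from any real study:

```python
import math

# Sketch of a weak skeptical prior on the log odds ratio (log-OR) of the
# adverse event, using the normal-normal conjugate approximation.
# All numbers below are illustrative, not from any real study.

# Skeptical prior: centred at log(1) = 0, with SD chosen so that roughly
# 5% of the prior mass lies above OR = 2.
prior_mean = 0.0
prior_sd = math.log(2) / 1.645

# Hypothetical observational estimate: OR = 1.3, 95% CI (1.10, 1.54).
est_log_or = math.log(1.3)
est_se = (math.log(1.54) - math.log(1.1)) / (2 * 1.96)

# Posterior mean is the precision-weighted average of prior and data.
w_prior = 1 / prior_sd**2
w_data = 1 / est_se**2
post_mean = (w_prior * prior_mean + w_data * est_log_or) / (w_prior + w_data)
post_sd = (w_prior + w_data) ** -0.5

# Posterior probability that the OR exceeds 1 (standard normal CDF).
p_harm = 0.5 * (1 + math.erf(post_mean / (post_sd * math.sqrt(2))))

print(f"posterior OR: {math.exp(post_mean):.3f}")
print(f"P(OR > 1):    {p_harm:.3f}")
```

The posterior is pulled only slightly toward the null here because the hypothetical estimate is fairly precise, which previews the concern about large samples below.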
Do you have any examples or guidance here?
Recognizing I'm not an expert here, my gut reaction is to suggest taking a look at what the folks involved in Sentinel/Mini-Sentinel (US) and CNODES (Canada) do, since this type of situation is the reason they were formed.
My other reaction is that if you're concerned about a biased effect, you might be better off trying to quantify what that bias might be and model it directly, since a sufficiently large observational study would eventually overwhelm any prior you put on the coefficient. You could still use some sort of regularizing prior on your treatment effect, but I see the purpose as slightly different.
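The "prior gets overwhelmed" point can be shown with a quick calculation. Assume (hypothetically) that the observed log-OR is purely confounding-induced, and watch the posterior converge to that biased estimate as the sample grows; the prior scale, the bias magnitude, and the per-subject information are all made-up numbers:

```python
import math

# Illustration (with made-up numbers) of why a fixed skeptical prior on the
# treatment coefficient cannot protect against bias at large n: the
# likelihood's precision grows with n and swamps the prior, so the posterior
# converges to the (biased) data estimate.

prior_mean, prior_sd = 0.0, 0.35      # weak skeptical prior on the log-OR
biased_log_or = math.log(1.15)        # "effect" due purely to confounding
unit_se = 20.0                        # SE of the log-OR per sqrt(n), illustrative

for n in (1_000, 10_000, 100_000, 1_000_000):
    se = unit_se / math.sqrt(n)
    w_prior, w_data = prior_sd**-2, se**-2
    post_mean = (w_prior * prior_mean + w_data * biased_log_or) / (w_prior + w_data)
    print(f"n = {n:>9,}   posterior OR = {math.exp(post_mean):.3f}")
```

By n in the hundreds of thousands the posterior OR is essentially the confounded estimate, which is why modeling the bias directly (rather than relying on the prior to absorb it) seems like the safer route.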
An example of combining both sets of information in a meta-analysis is the paper by Schmitz et al. below. It is set in the NMA context, but the concepts carry over to the pairwise setting. Brian Hutton's dissertation also has a lot of good material on this question.
In either case, putting some good thought into the causal pathway would go a long way toward making your findings more credible. Lastly, there might be an interesting hierarchical application you could exploit: maybe your database, RCTs, or well-conducted post-market surveillance studies can arrive at a good, unbiased estimate for a closely related drug (e.g., same class or pathway) that you could borrow strength from.
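A toy version of that borrowing-strength idea: treat the new drug's noisy log-OR and a precisely estimated log-OR for a related same-class drug as draws from a common class-level distribution, and shrink the new-drug estimate toward the class mean. The estimates, standard errors, and the between-drug SD (tau) are all hypothetical:

```python
import math

# Rough sketch of hierarchical "borrow strength": shrink a noisy new-drug
# log-OR toward a class-level mean informed by a well-estimated related drug.
# All inputs below are hypothetical.

estimates = {
    "related_drug": (math.log(1.05), 0.05),  # precise, assumed near-unbiased
    "new_drug":     (math.log(1.40), 0.30),  # noisy early estimate
}
tau = 0.15  # assumed between-drug SD within the class

# Precision-weighted class-level mean (weights account for tau).
w = {d: 1 / (se**2 + tau**2) for d, (_, se) in estimates.items()}
class_mean = sum(w[d] * est for d, (est, _) in estimates.items()) / sum(w.values())

# Shrink each drug-specific estimate toward the class mean.
for d, (est, se) in estimates.items():
    b = se**2 / (se**2 + tau**2)           # shrinkage factor in [0, 1]
    shrunk = (1 - b) * est + b * class_mean
    print(f"{d}: raw OR {math.exp(est):.2f} -> shrunk OR {math.exp(shrunk):.2f}")
```

The noisy new-drug estimate gets pulled strongly toward the class mean while the precise related-drug estimate barely moves; a full analysis would of course estimate tau rather than fix it.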
- Schmitz S, Adams R, Walsh C. Incorporating data from various trial designs into a mixed treatment comparison model. Stat Med. 2013;32:2935–49.
- Hutton B, Joseph L, Fergusson D, Shapiro S, Mazer D. Risk of death with aprotinin in cardiac surgery: A Bayesian evidence synthesis of randomized and observational studies. Clin Trials. 2011;8:476. doi:10.1177/1740774511413037.