Robust Estimation of Marginal Effects (and Uncertainty) using Bayesian Additive Regression Trees

Hi all. Thanks for this frequently fascinating board.

My research interests at present center on using observational datasets to support causal inference. I’m well aware of the philosophical, analytical, and practical issues presented by trying to take this approach to data analysis, so we can stipulate some understanding of the topic as we start. I’m curious about opinions regarding the following specific approach to a specific problem, and what people think is the best next step.

The issue: I’d like to estimate the degree of association between a non-randomized binary treatment (RX) and a binary outcome (OUTCOME), where I have fairly complete knowledge of (and data on) many variables known to increase or decrease the risk of the outcome.

I’ve never much liked propensity scores or matching (due to the assumptions of the propensity score approach to modeling the assignment mechanism, and the loss of data inherent in matching methods), and as an alternative I’ve been exploring Bayesian Additive Regression Trees (BART) as a method that both flexibly models the response surface and obviates the need for modeling the assignment mechanism. For those who aren’t familiar with the method, here’s a link to Jennifer Hill’s fantastic paper on the method.

My specific question comes with reporting inference from a BART model. Hill advocates reporting the conditional average treatment effect (CATE) or the conditional average treatment effect on the treated (CATT), which estimate the average effect of treatment over a population (defined as either all individuals or just the treated, respectively).
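To make the averaging explicit (this is my paraphrase of the sample-level versions of these estimands, writing $\hat f$ for the fitted BART response surface and $N_1$ for the number of treated individuals):

$$\widehat{\text{CATE}} = \frac{1}{N}\sum_{i=1}^{N}\left[\hat f(x_i, \text{RX}=1) - \hat f(x_i, \text{RX}=0)\right], \qquad \widehat{\text{CATT}} = \frac{1}{N_1}\sum_{i:\,\text{RX}_i=1}\left[\hat f(x_i, 1) - \hat f(x_i, 0)\right]$$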

So question #1: in both the CATE and the CATT, an individual’s predicted and observed OUTCOME under the BART model are used to generate an individual treatment effect, and a population mean of these estimated effects is then used to generate the CATE or CATT. For my purposes, I would like to report both a ratio and an absolute reduction in the probability of the outcome. Given the uncertainty in the model, what is the proper way to construct an uncertainty band for the absolute change in probability and for the ratio (the relative risk)? It doesn’t seem to me that taking quantiles of the individual effects underlying the CATE or CATT is appropriate, as doing so would estimate uncertainty in the population rather than in the model.
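To make question #1 concrete, here is a minimal numpy sketch of the kind of calculation I have in mind: average over individuals within each posterior draw, then take quantiles across draws. The arrays below are simulated stand-ins; in practice they would be posterior draws of each individual’s outcome probability from the fitted BART model, obtained by predicting on two copies of the design matrix with the RX column set to 1 and to 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in arrays for illustration only; in practice these come from the BART
# posterior. Shape: (n_posterior_draws, n_individuals).
n_draws, n_obs = 1000, 500
p1_draws = rng.beta(2, 8, size=(n_draws, n_obs))  # P(OUTCOME=1 | x_i, RX=1), per draw
p0_draws = rng.beta(3, 8, size=(n_draws, n_obs))  # P(OUTCOME=1 | x_i, RX=0), per draw

# Average over individuals *within* each posterior draw, so every draw yields one
# value of the sample-average risk under treatment and under control.
risk1 = p1_draws.mean(axis=1)            # shape: (n_draws,)
risk0 = p0_draws.mean(axis=1)

risk_difference = risk1 - risk0          # absolute change in probability, per draw
risk_ratio = risk1 / risk0               # relative risk, per draw

# Quantiles are taken across posterior draws (model uncertainty),
# not across individuals (population heterogeneity).
rd_lo, rd_hi = np.percentile(risk_difference, [2.5, 97.5])
rr_lo, rr_hi = np.percentile(risk_ratio, [2.5, 97.5])
print(f"Risk difference: {risk_difference.mean():.3f} ({rd_lo:.3f}, {rd_hi:.3f})")
print(f"Risk ratio:      {risk_ratio.mean():.3f} ({rr_lo:.3f}, {rr_hi:.3f})")
```

This is only a sketch of how I picture the bookkeeping, not a claim about how Hill or any particular BART package does it; whether those draw-wise quantiles are the right uncertainty statement is exactly what I’m asking.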

Having asked that question, let’s presume for a moment that I don’t want to look at the change in probability of the outcome at the population level, as doing so will be highly dependent on the distribution of relevant covariates in the population. Rather, what I’d really like to state for the reader is an estimate of the marginal change in probability of OUTCOME (absolute and as a ratio) associated with administration of RX. If one assumed no interactions between the treatment and the other covariates, one could do the standard “set covariates to their medians and predict the outcome,” but part of the point of BART is not having to assume simple linear relationships or the absence of interactions between variables. Given the way BART works, it seems to me that applying Friedman’s partial dependence (PDP) to the BART model is a reasonable way of generating an estimate of the marginal effect.

So question #2 becomes: (conditional on the belief that partial dependence is the best method for estimating the marginal association between RX and OUTCOME) what is an appropriate mechanism for estimating uncertainty in the marginal effect as estimated by PDP?

Two potential approaches come to mind: 1) bootstrapping the dataset used to fit the BART model (computationally quite expensive), and 2) exploiting the fact that a PDP of BART, at baseline, returns a population mean at each level of the covariate matrix and then a population mean of those means; would taking a mean of the 5th and 95th quantiles generate a robust estimate of uncertainty?
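For reference, here is a rough Python sketch of a variant of option 2, where the partial-dependence contrast is computed separately within each posterior draw and the quantiles are then taken across draws (rather than averaging per-individual quantiles). The `predict_draws` callable is hypothetical, standing in for whatever per-draw prediction interface a given BART implementation exposes; nothing here is tied to a specific package.

```python
import numpy as np
import pandas as pd

def pd_marginal_effect(predict_draws, X, treatment_col="RX", q=(2.5, 97.5)):
    """Posterior summary of the partial-dependence contrast for a binary treatment.

    predict_draws: callable taking a covariate DataFrame and returning an array
        of shape (n_posterior_draws, n_rows) of predicted outcome probabilities
        -- a hypothetical wrapper around the BART model's per-draw predictions.
    X: observed covariate DataFrame, including the treatment column.
    """
    # Friedman's partial dependence for a binary covariate: predict for every
    # observed covariate row with the treatment forced to 1 and then to 0.
    draws1 = predict_draws(X.assign(**{treatment_col: 1}))
    draws0 = predict_draws(X.assign(**{treatment_col: 0}))

    # Average over the observed covariate distribution *within* each posterior
    # draw; the result is one partial-dependence contrast per draw.
    effect = draws1.mean(axis=1) - draws0.mean(axis=1)

    # Summarise across draws: point estimate plus an interval that reflects
    # model uncertainty rather than covariate heterogeneity.
    return effect.mean(), np.percentile(effect, q)
```

Whether this draw-wise aggregation is a legitimate credible interval for the PD-based marginal effect, or whether the bootstrap is still needed on top of it, is the heart of question #2.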

This set of questions has been quite text-heavy and light on equations or graphs: if visualizing some elements of this question would help those proposing answers, please let me know and I’d be happy to provide such visualizations.

In advance of any response, thanks for reading this far!


This paper deals with uncertainty estimation for random forests, not BART, but I wonder if it might be helpful in an indirect way: Confidence Intervals for Random Forests: The Jackknife and the Infinitesimal Jackknife

Thanks for your message, @LRSamuels! I’m not going to claim enough statistical chops to fully evaluate the paper you linked, but I think it’s addressing a slightly different problem than the one I’m trying to pose.

I’m going to try to restate the problem one more time, as I see that many people have looked at this thread but you’re the first to comment.

The output of BART is a full Bayesian posterior distribution: for a given set of covariates, BART gives me posterior draws of the predicted outcome, from which I can, as a Bayesian, compute a median (or any other quantile).

What I want to describe is the marginal effect of a given predictor on the output of the model, as well as the uncertainty in my estimate of that marginal effect. Generating a “causal” estimate for a given observation is quite simple with BART. Estimating the uncertainty of an individual estimate is similarly easy. Using methods like partial dependence, estimating the mean effect averaged over all the other potential levels of the covariates is similarly easy. What I’m not clear on is how to aggregate the uncertainty in the individual observation estimates into a global measure of uncertainty in the marginal effect estimate.

Anybody care to comment on that more directed single question?