Modeling pre-post intervention changes with no control

We are considering an observational assessment of differences in a primary outcome (blood iron) over two time points with no control group. The first time point is pre-intervention and the second is post-intervention. Interested parties believe the intervention will only increase iron for individuals who already have low iron.

The question is how to test this while respecting regression to the mean; specifically, how to estimate the increase in iron as a function of initial iron levels. A simple paired t-test doesn't assess whether only low-iron individuals were helped, and a paired t-test restricted to the lowest quartile suffers from regression to the mean.
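To make the last point concrete, here is a minimal R sketch (all parameter values are invented for illustration): with no intervention effect at all, a paired t-test restricted to the lowest baseline quartile still shows an apparent increase, purely from regression to the mean.

```r
# Illustration only: no intervention effect, yet the lowest baseline quartile "improves"
# (all numbers below are arbitrary choices for the sketch)
set.seed(1)
n         <- 400
true_iron <- rnorm(n, mean = 100, sd = 15)   # each person's underlying level
pre       <- true_iron + rnorm(n, 0, 10)     # baseline measurement + short-term variation
post      <- true_iron + rnorm(n, 0, 10)     # follow-up, with no intervention effect
low       <- pre <= quantile(pre, 0.25)      # "low iron" defined from the noisy baseline
t.test(post[low], pre[low], paired = TRUE)   # apparent increase: regression to the mean
```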


Everything is confounded with the treatment effect with that particular design. It is not fruitful to pursue that question with that design IMHO.


Clinician here. Sorry for my colleagues, we tend to want causal inference even when the design does not allow it.

First of all, maybe they can be convinced to use hemoglobin instead. For hemoglobin (I have never heard of this for serum iron), an increase of 1 g/dL is usually considered to demonstrate effectiveness of iron supplementation, even if I can't point you to a reference. Of course, even people with low "normal" (within the lab's reference range) hemoglobin levels might improve if their non-ferropenic levels are higher than the population average. So you might compare the proportion of people with improved (> 1 g/dL) hemoglobin across baseline levels of serum iron, or even use the difference in hemoglobin as a continuous variable, discussing the inability to eliminate regression to the mean completely.
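If that route were taken, it might be implemented along these lines (a sketch only; the data frame `d` and its columns `hemoglobin_pre`, `hemoglobin_post`, and `iron_baseline` are assumptions about how the data are stored):

```r
# Sketch assuming a data frame d with columns hemoglobin_pre, hemoglobin_post, iron_baseline
d$hb_change <- d$hemoglobin_post - d$hemoglobin_pre
d$responder <- d$hb_change > 1                      # > 1 g/dL improvement

# Proportion of responders as a function of baseline serum iron (kept continuous)
fit_bin  <- glm(responder ~ iron_baseline, family = binomial, data = d)

# Or keep the hemoglobin change continuous and relate it to baseline iron
fit_cont <- lm(hb_change ~ iron_baseline, data = d)

summary(fit_bin); summary(fit_cont)
```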

Another way might be to use the known variability of the lab tests to construct a hypothetical control. It is not the same as having a proper design, but it may be good enough if the limitations are discussed.
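One concrete version of that idea (a sketch, not necessarily what was meant) uses the reference change value built from the analytical and within-subject biological coefficients of variation; observed changes smaller than the RCV are compatible with measurement and biological variability alone. The CV figures below are placeholders that would have to come from the laboratory or the literature.

```r
# Reference change value (RCV) at ~95% confidence:
# RCV = sqrt(2) * 1.96 * sqrt(CV_analytical^2 + CV_within_subject^2)
cv_a <- 1.5   # analytical CV in %, placeholder value
cv_i <- 3.0   # within-subject biological CV in %, placeholder value
rcv  <- sqrt(2) * 1.96 * sqrt(cv_a^2 + cv_i^2)
rcv   # percent change that must be exceeded before calling a change "real"
```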

How do you know when that’s good enough?


I don’t have any references for using > 1 g/dL improvement in hemoglobin as a marker of response. That would be necessary for using it in research, for sure.

leonardof, the response was hemoglobin; apologies for not clarifying that.

What I ended up doing was as follows:
1. Breaking pre-intervention hemoglobin into quartiles (in prep for a pre-post comparison)
2. Fitting a mixed effects model with time (pre/post) as a factor:

lmer(hemoglobin ~ time + (1|subject))

3. Using the variance components from (2) to conduct a simulation that involves repeatedly dividing individuals into quartiles based on their pre values. The simulation is used to estimate the expected change due to regression to the mean.

# n: number of subjects; sd_btw and sd_within: between- and within-subject SDs
# taken from the variance components of the mixed model above
sim <- function() {
  individuals <- rnorm(n, 0, sd_btw)        # subject effects
  errors_pre  <- rnorm(n, 0, sd_within)     # occasion/measurement error at pre
  errors_post <- rnorm(n, 0, sd_within)     # occasion/measurement error at post
  pre  <- individuals + errors_pre
  post <- individuals + errors_post
  ix <- order(pre)[1:(n/4)]                 # lowest quartile of the observed pre values
  mean(post[ix]) - mean(pre[ix])            # change expected from regression to the mean alone
}
mean(replicate(1000, sim()))

4. Obtaining a t-based confidence interval after subtracting the estimated regression-to-the-mean effect (a sketch pulling steps 2-4 together follows below)
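Pulling steps 2-4 together, it might look something like the sketch below. This is not the exact code used; the data frame `dat` (long format with columns hemoglobin, time, and subject) is an assumption about how the data are stored, and `sim()` is the function defined above.

```r
library(lme4)

# Sketch only: assumes a long-format data frame `dat` with columns hemoglobin,
# time ("pre"/"post"), and subject, with rows ordered the same way within each time.

# Step 2: random-intercept model for the pre/post measurements
fit <- lmer(hemoglobin ~ time + (1 | subject), data = dat)

# Variance components feeding the simulation
vc        <- as.data.frame(VarCorr(fit))
sd_btw    <- vc$sdcor[vc$grp == "subject"]    # between-subject SD
sd_within <- vc$sdcor[vc$grp == "Residual"]   # within-subject (residual) SD
n         <- length(unique(dat$subject))

# Step 3: expected lowest-quartile change from regression to the mean alone
rtm <- mean(replicate(1000, sim()))           # sim() as defined above

# Step 4: observed lowest-quartile change minus the RTM estimate, with a t-based CI
pre  <- dat$hemoglobin[dat$time == "pre"]
post <- dat$hemoglobin[dat$time == "post"]
ix   <- order(pre)[seq_len(floor(n / 4))]
tt   <- t.test(post[ix] - pre[ix])
c(estimate  = unname(tt$estimate) - rtm,
  conf.low  = tt$conf.int[1] - rtm,
  conf.high = tt$conf.int[2] - rtm)
```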

Limitations that I see with this analysis:
1. Arbitrary discretization
2. Doesn't incorporate uncertainty in the estimated regression-to-the-mean effect
3. Yes, the effect is not causal

However, I think 1 and 2 are not too hard to fix: run the simulation as a linear regression (of the simulated change on the simulated pre values) rather than breaking into quartiles, and then do

 lmer(hemoglobin ~ time * rcs(pre, 3) + (1|subject)) 

and take the difference between the actual spline and the line estimated from regression to the mean to judge the "treatment effect" (not a causal treatment effect). To fix the second issue, use the full distribution from the simulation rather than just its mean.
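One way to implement the spirit of this (a sketch, not the exact code used): since the subject effect cancels in the within-subject difference, the observed change can be modeled directly as a spline of the pre value with `rms::ols`, and compared with the regression-to-the-mean line estimated by linear regression on simulated no-effect data. `pre`, `post`, `n`, `sd_btw`, and `sd_within` are assumed to carry over from the earlier steps.

```r
library(rms)   # for rcs() and ols()

# Observed pre-to-post change as a smooth (spline) function of the baseline value;
# the subject effect cancels in post - pre, so a plain regression on the change suffices
obs     <- data.frame(pre = pre, change = post - pre)
obs_fit <- ols(change ~ rcs(pre, 3), data = obs)

# Regression-to-the-mean line: simulate no-effect data from the fitted variance
# components and regress the simulated change linearly on the simulated pre value
m         <- 50 * n                            # many simulated subjects for a stable line
sim_truth <- rnorm(m, mean(pre), sd_btw)
sim_pre   <- sim_truth + rnorm(m, 0, sd_within)
sim_post  <- sim_truth + rnorm(m, 0, sd_within)
rtm_fit   <- lm(change ~ sim_pre,
                data = data.frame(sim_pre = sim_pre, change = sim_post - sim_pre))

# Difference between the observed spline and the RTM line, over the observed pre range
grid   <- data.frame(pre = seq(min(pre), max(pre), length.out = 100))
effect <- predict(obs_fit, grid) - predict(rtm_fit, data.frame(sim_pre = grid$pre))
plot(grid$pre, effect, type = "l",
     xlab = "Pre hemoglobin", ylab = "Change beyond regression to the mean")
```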

Overall, I would argue that there is at least some value in removing the regression-to-the-mean effect even if you have a design for which potentially everything is confounded. The benefits of this approach are as follows. Consider the model: y_ij = mu + alpha_i + beta_j + epsilon_ij

alpha_i is the pre/post (time) effect, beta_j is the subject effect, and epsilon_ij is the error. When we focus on estimating the causal treatment effect for pre/post as a function of the pre values, two kinds of confounding come into play: (a) the epsilons, only because we want the treatment effect as a function of the pre values, and (b) systematic population-level confounders. Removing the known issue of regression to the mean lets someone focus on the likely magnitude of the population-level confounders and consider whether it is reasonable to argue that the known confounders have effects substantially below the magnitude of the estimated pre/post effect (giving greater evidence of a causal effect).
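For reference, under that model (with independent, normally distributed beta_j and epsilon_ij), the expected change conditional on the observed pre value has a closed form, which is what the simulation above approximates:

$$
E\left[\,y_{2j} - y_{1j} \mid y_{1j}\,\right] = (\alpha_2 - \alpha_1) - (1 - r)\,\left(y_{1j} - \mu - \alpha_1\right),
\qquad r = \frac{\sigma_\beta^2}{\sigma_\beta^2 + \sigma_\epsilon^2}
$$

With no pre/post effect (alpha_2 = alpha_1), subjects whose observed pre value lies below the mean are still expected to increase, which is exactly the quantity the simulation estimates for the lowest quartile.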

Obviously this gives nowhere near the same ability to rule out confounders, but it is better than not addressing the magnitude of regression to the mean at all.

Hi, what if I were interested just in the change of a continuous outcome between two time points, i.e., assessing whether the difference between pre- and post-intervention in one sample is non-zero? In the BBR notes I read that I should first plot the pre-post differences versus the pre values to look for any trend. If that check holds (no apparent trend), could I then use the Wilcoxon signed-rank test?
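For what it's worth, that exploration might look something like this (a sketch; `pre` and `post` are assumed to be paired vectors for the same subjects):

```r
# Plot the within-subject change against the baseline value to look for a trend
plot(pre, post - pre, xlab = "Pre", ylab = "Post - Pre")
abline(h = 0, lty = 2)
lines(lowess(pre, post - pre))          # smoother to help see any dependence on pre

# If the change does not appear to depend on the pre value, a paired test of
# "no systematic change over time" (not a treatment effect) could be:
wilcox.test(post, pre, paired = TRUE)   # Wilcoxon signed-rank test
```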

What is the goal of the analysis? If you are trying to make a causal statement such as "had we given the treatment we'd have seen an xx% greater reduction in bad outcomes than had we not given it," then you can give up; you do not have a design that supports that. If you are trying to say that "people change over time" without attributing any cause to that change, then your data may be helpful.

Thanks for your answer. Given the design, there is no causal intention at all! I was just exploring the available strategies for such a "change over time" analysis.

You need to have an analytic goal. Just exploring the data is not enough. What do you expect to learn from the exploration? Will you report it? How will people interpret the result?
