How to compare 2 nonlinear curves?

So I am analyzing what is essentially a dose-response experiment (except the x axis is temperature rather than dose, and the y axis is intensity) for some protein degradation studies. There are many proteins to analyze.

We have a nonlinear parametric form, and it's supposed to be monotonically decreasing in an S shape (not exactly a logistic sigmoid, but something like that).

There are two curves: with and without treatment. So to start off:

  1. How do I compare the two nonlinear curves and test whether there is a “treatment effect” on the overall curve? We don’t care where the effect is, just whether the curves are “statistically different”. This is the simple situation, ignoring everything below; I expect I need nls(), but with the treatment indicator somehow modifying the parameters.
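For concreteness, something like the following is what I imagine, a sketch only: the four-parameter logistic form, the column names (y, x, trt as a 0/1 indicator, data frame dat), and the starting values are all made up, not from our actual model.

```r
## Hypothetical sketch: decreasing 4-parameter logistic where treatment
## shifts the midpoint (dx0) and the lower asymptote (dlo).
fit <- nls(
  y ~ (lo + dlo * trt) + (hi - (lo + dlo * trt)) /
      (1 + exp(b * (x - (x0 + dx0 * trt)))),
  data  = dat,
  start = list(lo = 0, dlo = 0, hi = 1, b = 0.5, x0 = 50, dx0 = 0)
)
summary(fit)  # t-tests on dlo and dx0 are per-parameter treatment effects

## "Whole curve" test: refit without the treatment terms and compare
fit0 <- nls(y ~ lo + (hi - lo) / (1 + exp(b * (x - x0))),
            data = dat, start = list(lo = 0, hi = 1, b = 0.5, x0 = 50))
anova(fit0, fit)
```

With b > 0 this curve decreases from hi to lo as x increases, which matches the monotone S shape described above.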

Now for the additional complexities:

The issue is that even though it’s an experiment, it was run over one fixed range of x for all proteins, and the “full S shape” does not necessarily get sampled for each one; for some proteins you might only see the top half or less. This means nls() will not converge for a whole bunch of the proteins, and the parametric form will not work. We would still like to compare the shapes. In that case I think we would need splines in mgcv::gam() to approximate the curve. Also, is there any way to constrain the spline to be monotonically decreasing?

  2. Using GAMs, how do I test for a treatment effect between the treatment and control curves and get a p-value?

If this wasn’t enough, we also have random effects involved. Each overall control and treatment curve actually consists of a bunch of “sub-curves” arising from different fragments of the same protein.

  3. How do I incorporate random effects into the above and then compare the two curves?

Additionally, this entire procedure eventually needs to be automated into a pipeline; we cannot manually examine plots to check for nls() convergence problems, and such problems are guaranteed in some cases. Given that, is it simpler just to go with gam() as the “generalizable”, scalable solution? Or attempt nls() and fall back to gam() if it does not converge?

For the first pass we will probably ignore random effects just to get something that works, but eventually REs are needed.

You might look into the R package ‘scam’, which supports monotonicity constraints and is built on mgcv. mgcv itself also has some functions buried in it for this, but from what I’ve seen they require a bit more “hacking” to get working.
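A minimal sketch of the scam route, assuming long-format data `dat` with columns `y`, `x`, and a factor `treatment` (names are assumptions); `bs = "mpd"` is scam’s monotone-decreasing P-spline basis, and I believe factor `by` variables work here as in mgcv, though it’s worth checking with your scam version:

```r
library(scam)

## One monotone-decreasing smooth per treatment level, plus the
## treatment main effect so the curves can differ in level.
fit <- scam(y ~ treatment + s(x, by = treatment, bs = "mpd"),
            data = dat)
summary(fit)
```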

For #2 (p-values for treatment effects), it sounds like you want a factor-smooth interaction. The mgcv documentation has some info on this (see the “by” argument of s()), and Gavin Simpson has a great writeup on it.

As for random effects for curves in mgcv, this paper has a nice review of the options, from simple random intercepts to curve-specific random smooths plus a “global” smooth.

There are probably other (possibly better) options too—I’m interested to hear what others recommend, since I bump into similar problems fairly often.


In addition to John’s excellent suggestions, it sometimes works well to fit spline functions interacted with group and to do a “chunk test” of whether the curves differ in vertical location, slope, or shape, i.e., whether the curves differ at any x coordinate.
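One way to sketch this is with the rms package (the data frame name `dat` and the 4-knot spline are assumptions): interact a restricted cubic spline in x with the group, and anova() then reports a combined chunk test of the group main effect and all its interaction terms.

```r
library(rms)

dd <- datadist(dat); options(datadist = "dd")

## Spline in x interacted with treatment group
f <- ols(y ~ treatment * rcs(x, 4), data = dat)

## The "treatment (Factor + Higher Order Factors)" row is the chunk test:
## do the curves differ in location, slope, or shape at any x?
anova(f)
```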

Thanks! That scam package looks like what I’m looking for, and it seems to return a regular gam object, which is good.

That paper also seems really informative. I’m actually wondering about using a global smooth plus a random-intercept term (bs=“re”), which is model G there, versus something like model GI in that paper, which makes the random effects themselves splines (a smooth per unit).

The latter uses far more degrees of freedom and is computationally slow, so I’m wondering how problematic misspecified random effects are when the interest is in the global curves. If the random-effect sub-curves (with just an RE intercept) don’t fit great but the overall curves are fine, will inference still be biased? We have to make statistical and computational/stability tradeoffs since this is going to be part of a bioinformatics pipeline. The REs are just a “nuisance” and those curves are not of specific interest.
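For reference, the two options I mean would look roughly like this in mgcv (a sketch; `fragment` is assumed to be a factor ID column, and the column names are made up):

```r
library(mgcv)

## "Model G": global treatment curves + a random intercept per fragment
mG <- gam(y ~ treatment + s(x, by = treatment) + s(fragment, bs = "re"),
          data = dat, method = "REML")

## "Model GI"-style: additionally a smooth deviation per fragment
## (factor-smooth basis; m = 1 penalizes deviations from the global curve)
mGI <- gam(y ~ treatment + s(x, by = treatment) +
               s(x, fragment, bs = "fs", m = 1),
           data = dat, method = "REML")
```

The fs term is what eats the degrees of freedom and compute time.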

It looks like the GAM returns s(x):treatment and s(x):control, so there are two p-values. For testing a treatment effect across the whole curve, is an ANOVA against a null model with only s(x) the way to go? So (ignoring random effects for now):

library(mgcv)

model_null <- gam(y ~ s(x), data = data)
model <- gam(y ~ s(x, by = treatment), data = data)
anova(model_null, model, test = "F")

Would the p-values from this F-test be valid for testing whether there is a treatment effect anywhere along the curve in this experiment? Is this the same as a “chunk test” of whether the curves differ at any x coordinate?

I also see in the summary.gam documentation that “the p-values are approximate and neglect smoothing parameter uncertainty. They are likely to be somewhat too low when smoothing parameter estimates are highly uncertain”. I’m not sure how much this matters practically, especially in a simple 1D experimental situation like this with enough data, plus the monotone constraints to further limit overfitting. Would the Type I error generally still be OK, or does some sort of calibration have to be done (say, by permuting the labels, though with random effects that already gets too complex)?

They probably use a cross-validation method to select the amount of smoothing, and without another cross-validation loop you’ll get confidence bands that are too narrow.

I tried the ANOVA approach, and a big issue with some of our datasets (this is going to have to run in a pipeline) is that it fails to return an F statistic/p-value when, due to the smoothing, the model with treatment ends up with fewer effective df than the model without it. The help for anova.gam describes this issue too.
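For the pipeline, the plan is to detect that case and fall back rather than crash; a sketch (assuming `dat` with columns `y`, `x`, `treatment`):

```r
library(mgcv)

m0 <- gam(y ~ s(x), data = dat, method = "REML")
m1 <- gam(y ~ treatment + s(x, by = treatment), data = dat, method = "REML")

tab <- anova(m0, m1, test = "F")
## When the larger model smooths down to fewer effective df than the null,
## the F and Pr(>F) entries come back NA; flag it and use another test.
if (all(is.na(tab$F))) {
  message("F-test unavailable (edf of full model <= edf of null); falling back")
}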

It also turned out that the model above needed a main-effect treatment term, i.e., scam(y ~ s(x, by = treatment) + treatment); otherwise it was fitting some weird curves.

Since the ANOVA isn’t working, I’m also considering the “causal inference” approach of average marginal effects/G-computation to get the ATE, although the issue is that the treatment effect is not constant in x. X is not a confounder, but it is automatically an effect modifier by the nature of a nonlinear dose-response curve.

But it seems that marginaleffects at least returns a p-value, and the causal-inference approaches don’t rely on comparing df between models. We could also obtain separate effects conditional on x, but then there is a multiple-testing issue depending on how big the grid is.
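What I have in mind is roughly this, assuming a fitted gam/scam model `fit` with a factor `treatment` and data `dat` (names assumed): avg_comparisons() averages the treated-vs-control contrast over the observed x values, which is the ATE-style summary, and returns an estimate with a delta-method p-value.

```r
library(marginaleffects)

## Average treatment effect across the observed x values, with p-value
avg_comparisons(fit, variables = "treatment")

## Or x-specific (conditional) contrasts on a grid -- note the multiple
## testing concern grows with the grid size
comparisons(fit, variables = "treatment",
            newdata = datagrid(x = seq(min(dat$x), max(dat$x),
                                       length.out = 10)))
```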

I did some label-permutation simulations (at the exact same x values for control and treatment; we also have some independent replicates of each in this case) on a dataset without random effects. The p-values from the ANOVA have Type I error rates near 10–15% for this dataset, while the marginal effects/ATE p-value is more in the 5–8% range, despite the cross-validated smoothing.
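The permutation check itself is simple to sketch (names `dat` and the user-supplied refit function `fit_and_p` are placeholders):

```r
## Permute treatment labels to break any true effect, refit, and record
## the p-value; the rejection rate at alpha = 0.05 estimates Type I error.
perm_p <- replicate(500, {
  d <- dat
  d$treatment <- sample(d$treatment)  # label permutation under the null
  fit_and_p(d)                        # refit model, return the p-value
})
mean(perm_p < 0.05)  # near 0.05 if the test is well calibrated
```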