And with plot.predict with CIs out of range

and with plot.summary…

You can edit earlier posts without adding multiple replies.

See https://hbiostat.org/fh/contacts.html for information about how to produce a minimal self-contained working example that I can use in debugging.

Ok, done …

Please put at https://github.com/harrelfe/rmsb/issues and let’s only deal with one issue at a time, resolving that issue before posting the next issue.

Ok, I also wanted to ask whether this is the correct way to specify the CPPO model when the term that does not meet the PO assumption (Age) appears in an interaction cir * Age (cir is the type of surgery; see the code above). Also, are the splines well specified in the formula?

I haven’t tested that, but I’m fairly certain that this should work:

```
blrm(y ~ x1 * x2 + x3, ~ x1 * x2, cppo=function(y) y >= 3)
```

The departure from PO should be applied to all main-effect and interaction terms involving x1 and x2.
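As a minimal base-R sketch (not using rmsb itself) of what the `cppo` function in the example above computes: for each outcome level y it returns the multiplier applied to the non-PO part of the linear predictor for the terms in the second formula.

```r
# The cppo function from the blrm() call above: an indicator of y >= 3
cppo <- function(y) y >= 3

# For an ordinal outcome with levels 1..5, the departure from PO is
# switched on only for intercepts/cut-offs at y >= 3
cppo(1:5)
#> FALSE FALSE  TRUE  TRUE  TRUE
```

So under this constraint the non-PO effects of x1 and x2 shift the log odds only for cut-offs at level 3 and above, using one extra parameter per constrained term.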

If the ordinal endpoint has many levels, an additional topic suggested but not spelled out in the 1990 paper is how to model non-linear relationships between the coefficients of constrained terms and the outcome levels. For example, how would one code an upwardly concave parabola, or a downward or upward curve? Also, what should be done if there are outliers at some cut-off points that may unduly influence the gamma parameter? Does it make sense to plot the coefficients of a multinomial model against the levels of the ordinal variable as a regression diagnostic, to decide on the best fit?

The constraint can be of any shape you specify, and the model will use whatever shape you impose. If you need to estimate the shape of the constraint, you’ll need something like monotone splines in Y interacting with covariates, which is not implemented in the function.
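Since the constraint is just an R function of y, a few candidate shapes can be sketched in base R. These are illustrations only, not rmsb defaults, and the nadir value of 6 below is an arbitrary assumption:

```r
# Candidate cppo constraint shapes for an ordinal outcome with levels 1..12.
# Only the shape of the function matters; each constrained covariate still
# gets a single extra parameter scaling this shape.
cppo_linear    <- function(y) y            # monotone, first-order departure
cppo_threshold <- function(y) y >= 6       # departure only above a cut-off
cppo_vshape    <- function(y) abs(y - 6)   # non-monotone, nadir at y = 6
cppo_quadratic <- function(y) (y - 6)^2    # smooth non-monotone alternative

y <- 1:12
cbind(linear    = cppo_linear(y),
      threshold = cppo_threshold(y),
      vshape    = cppo_vshape(y),
      quadratic = cppo_quadratic(y))
```

Any one of these could be passed as the `cppo` argument to `blrm()`; the choice fixes the shape of the departure from PO across the outcome levels.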

For example, here I fit a multinomial regression.

```
library(nnet)
library(rms)   # for rcs() and pol()
test <- multinom(HE6 ~ cir * rcs(Age, 3) + linf + RR_PAC5 + Esquema2 + Estadio3 + pol(EORTC),
                 data = DAT)
```

After that, I plotted the coefficients of one of the variables against the cut-off points of Y.

```
library(tidyr)     # gather()
library(ggplot2)

# Coefficients for each cut-off point of Y (rows) and each term (columns)
coefi  <- data.frame(coef(test)[, 2:14])
coefi2 <- gather(coefi, key = "variable", value = "coefficient", 1:12)
coefi2$y_threshold <- rep(seq(1, 12, 1), 12)

# Keep only the coefficients for the cir x Age spline interaction term
num    <- coefi2$variable == "cir1.rcs.Age..3.Age"
coefi3 <- subset(coefi2, num)

# Quadratic fit of the coefficients against the Y cut-off points
fit <- lm(coefficient ~ y_threshold + I(y_threshold^2), data = coefi3)
summary(fit)

# Pointwise 95% confidence band for the fitted curve
prd <- data.frame(y_threshold = 1:12)
err <- predict(fit, newdata = prd, se.fit = TRUE)
prd$fit <- err$fit
prd$lci <- err$fit - 1.96 * err$se.fit
prd$uci <- err$fit + 1.96 * err$se.fit

ggplot(prd, aes(x = y_threshold, y = fit)) +
  theme_bw() +
  geom_line() +
  geom_smooth(aes(ymin = lci, ymax = uci), stat = "identity") +
  geom_point(data = coefi3, aes(x = y_threshold, y = coefficient))
```

A quadratic model seems reasonable, and better than a linear model. How would this shape be coded within the cppo function of the rmsb package?

If you really believed what the graph is estimating, you could code `cppo = function(y) abs(y - nadir)`, where `nadir` is the value of Y at the low point of the graph.

Thank you very much, Professor Harrell, for your answer. I have already finished the article predicting quality of life in women with breast cancer using the rmsb package, pending polishing these details and the code issue that doesn’t work. Regarding the cppo function, it is not very clear to me why a quadratic model Y = level + level^2 corresponds to cppo = function(y) abs(y - nadir). Since this part is the key that distinguishes the Peterson–Harrell model from other options, I think it would be very useful for other users to clarify this point. For example, perhaps the code for monotonically increasing or decreasing functions, non-monotonic functions, or even the way to use splines within cppo could be better explained.

I was using an absolute value because the graph looked more like that than quadratic to me. You could instead have used (y - y_nadir)^2. There are two partial proportional odds models in Peterson & Harrell (1990): the unconstrained one, which is more flexible, as you are implying you want, but takes many more parameters; and the constrained one, which is a restricted, one-parameter-per-covariate version.
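The connection between the fitted quadratic and abs(y - nadir) can be seen numerically: both shapes are non-monotone with their minimum at the nadir, differing only in how fast they rise away from it. The nadir value of 6 below is an arbitrary illustration, standing in for the low point read off the graph:

```r
nadir <- 6                      # hypothetical low point from the graph
y     <- 1:12                   # ordinal outcome levels

vshape   <- abs(y - nadir)      # V-shaped constraint
parabola <- (y - nadir)^2       # quadratic constraint

# Both reach their minimum at the nadir...
c(which.min(vshape), which.min(parabola))
#> [1] 6 6
# ...but the parabola grows quadratically away from the nadir,
# while the V shape grows linearly.
```

Either function could be supplied as `cppo`; with a single scaling parameter per covariate, the data cannot distinguish fine differences between the two shapes unless the sample is large.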

Whether you use subjective visual checks or parameterized models, the degrees of freedom you choose to give to a part of a model are part of the variance–bias tradeoff. This tradeoff is trickier with smaller sample sizes, because the width of credible intervals increases with more parameters, and Bayesian power can be lost by not concentrating effects into fewer parameters. For the sample sizes we often encounter in human outcome studies (including clinical trials), the constrained partial PO model is a good compromise (in the above example, perhaps with a linear constraint). You can think of it this way: many analysts don’t worry about the PO assumption at all, so they fit no exceptions to PO in their models. You, on the other hand, are allowing for at least a first-order exception.

The Bayesian equivalent of the likelihood ratio test is very interesting in this constrained ordinal model. Since continuous variables deviate most from the PO assumption, it will often be necessary to evaluate the impact of interactions between a binary factor and splines. This is very important clinically, for example to assess whether the effect of an intervention varies continuously with age.

I wanted to ask how to interpret a comparison of two models using compareBmods.

For example, here I have two models, the first one contemplates an interaction with a spline, the second one is an additive model. Can you please explain the interpretation of the result?

```
> bs_neuro3 <- blrm(HE6 ~ neuro2 * rcs(Age, 3) + stage + pol(EORTC),
                    ~ neuro2 * rcs(Age, 3) + pol(EORTC),
                    cppo = function(y) , data = s)
> bs_neuro4 <- blrm(HE6 ~ neuro2 + rcs(Age, 3) + stage + pol(EORTC),
                    ~ rcs(Age, 3) + pol(EORTC),
                    cppo = function(y) , data = s)
> compareBmods(bs_neuro3, bs_neuro4)
Method: stacking
------
weight
model1 0.175
model2 0.825
```

Not sure why you thought continuous variables would have more problem with PO.

The interpretation is best covered in McElreath’s book. I think of model 2 as having probability 0.825 of being the better of the two models at fitting the data-generating process.

Thank you for the book recommendation. McElreath’s book seems worth reading in its entirety, not just for that particular topic. I don’t know why I assumed that violations of PO are more frequent for continuous predictors. When splines are used, since the curve is more complex, it seemed to me that PO would have to fail more often. Has this been simulated to see whether that is the case?

A simulation result would do nothing more than reflect the true model that was used to simulate the data.

Can’t you simulate random continuous versus binary variables and test whether there are more deviations from PO in one case or the other? My guess is that when you use quadratic terms or splines, PO fails much more often.