I think the key lies in the last line of that code block. Although x1-x12 are generated identically, their associations with Y are not equal: the simulated magnitude of the association between xn and Y increases from x1 to x12.
require(rms)
n <- 300
set.seed(1)
# twelve predictors, all drawn from the same uniform distribution
d <- data.frame(x1=runif(n), x2=runif(n), x3=runif(n), x4=runif(n),
                x5=runif(n), x6=runif(n), x7=runif(n), x8=runif(n),
                x9=runif(n), x10=runif(n), x11=runif(n), x12=runif(n))
# the true coefficient of xn is n, so the association with y grows from x1 to x12
d$y <- with(d, 1*x1 + 2*x2 + 3*x3 + 4*x4 + 5*x5 + 6*x6 + 7*x7 + 8*x8 +
               9*x9 + 10*x10 + 11*x11 + 12*x12 + 9*rnorm(n))
When you fit a model to these data, I think you would expect the higher-numbered xn variables to rank higher, as they explain more of the variance in the outcome.
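To make that concrete, here is a minimal sketch of what I have in mind (assuming an ordinary least-squares fit with rms::ols is appropriate here; the exact model and ranking display may differ from what was used in the original discussion):

# fit all twelve predictors and rank them by their partial contribution
fit <- ols(y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9 + x10 + x11 + x12,
           data = d)
an <- anova(fit)   # partial test statistics for each predictor
print(an)
plot(an)           # dot chart of predictor importance; higher-numbered xn should sit near the top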
You might also be interested in the example in this topic, where we discussed the same method in the context of evaluating the added value/selection of predictors; I tried it there on my own (real) dataset.