I’m trying to develop a classification model with >70 variables and a training set of only 500 observations (case prevalence 10%). I’ve used cross-validation before to choose the penalty (lambda) for the lasso, similar to what’s described here. But it often selected too few variables. It was suggested to me that the elastic net is worth trying.
But now there are two tuning parameters: alpha (the lasso/ridge mixture) and lambda. How do I choose both at the same time? Are there any recommended steps?
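For context, the only approach I can think of is a joint cross-validated grid search over both parameters. A minimal sketch of what I have in mind, using scikit-learn's `LogisticRegressionCV` (the synthetic data, the alpha grid, and all settings here are just placeholders for illustration, not my actual data or choices):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

# Synthetic stand-in for my data: 500 samples, 70 predictors, ~10% cases
X, y = make_classification(
    n_samples=500, n_features=70, n_informative=10,
    weights=[0.9, 0.1], random_state=0,
)

# LogisticRegressionCV cross-validates over lambda (via Cs, where C = 1/lambda)
# and l1_ratio (the alpha mixing parameter in glmnet's terminology) jointly.
model = LogisticRegressionCV(
    penalty="elasticnet",
    solver="saga",              # the only sklearn solver supporting elastic net
    l1_ratios=[0.1, 0.5, 0.9],  # illustrative alpha grid
    Cs=10,                      # 10 values of C on a log-spaced grid
    cv=5,
    scoring="neg_log_loss",     # proper scoring rule rather than accuracy
    max_iter=5000,
    random_state=0,
)
model.fit(X, y)

print("chosen l1_ratio (alpha):", model.l1_ratio_[0])
print("chosen C (1/lambda):", model.C_[0])
```

Is a full two-dimensional search like this reasonable with such a small, imbalanced sample, or is there a smarter sequence (e.g. fixing alpha first)?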
For the first time (!) @f2harrell 's BBR did not have the answer for me.