10-fold cross-validation repeated 100 times is an excellent competitor to the Efron-Gong optimism bootstrap, and it works even in extreme cases where N < p, unlike the bootstrap. The bootstrap and cross-validation have exactly the same goals and recommended uses. For non-extreme cases the bootstrap is faster, and it has the advantage of validating model building at the full sample size N instead of \frac{9}{10}N. The 0.632 bootstrap has been shown to be better only for discontinuous accuracy scoring rules.
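As a concrete illustration, here is a minimal sketch of the Efron-Gong optimism bootstrap for ordinary least squares with R^2 as the accuracy index. The data, the number of resamples B, and the helper names (`fit_ols`, `r2`) are illustrative assumptions, not part of the original text; note that the model is refit on each resample of the full size N.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (illustrative only): 100 subjects, 3 predictors.
n, p = 100, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, 0.0])
y = X @ beta + rng.normal(size=n)

def fit_ols(X, y):
    """Least-squares fit with an intercept; returns coefficients."""
    Xd = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return coef

def r2(coef, X, y):
    """R^2 of a fitted model evaluated on (X, y)."""
    Xd = np.column_stack([np.ones(len(X)), X])
    resid = y - Xd @ coef
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Apparent accuracy: fit and evaluate on the full sample of size N.
coef_full = fit_ols(X, y)
apparent = r2(coef_full, X, y)

# Efron-Gong optimism: for each bootstrap resample of size N, the drop in
# accuracy when the resample-fitted model is applied back to the original data.
B = 400
optimism = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)      # sample N rows with replacement
    coef_b = fit_ols(X[idx], y[idx])
    optimism[b] = r2(coef_b, X[idx], y[idx]) - r2(coef_b, X, y)

corrected = apparent - optimism.mean()    # optimism-corrected R^2
print(round(apparent, 3), round(corrected, 3))
```

The corrected estimate subtracts the average optimism from the apparent R^2, giving a nearly unbiased accuracy estimate while still validating a model built on all N observations.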
Whether using the bootstrap or cross-validation, it is imperative that all supervised learning steps be repeated afresh at each resample when validating the model. Any analysis that utilized Y, including any feature selection based on association with Y, must be repeated from scratch inside each resample. Internal validation must be rigorous.
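The sketch below, under assumed synthetic pure-noise data, shows why Y-based feature selection must be redone inside every fold: honest cross-validation (selection repeated per fold) reports accuracy near chance, while the leaky version (selection on all of Y before CV) looks deceptively good. The helper names (`top_k_by_corr`, `cv_mse`) and all tuning values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pure-noise data: y is independent of all 50 candidate features, so an
# honest validation should report predictive accuracy near chance.
n, p, k = 60, 50, 5
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

def top_k_by_corr(Xtr, ytr, k):
    """Indices of the k features most correlated with y -- computed from
    the training fold ONLY, so selection is revalidated in every fold."""
    r = np.abs([np.corrcoef(Xtr[:, j], ytr)[0, 1] for j in range(Xtr.shape[1])])
    return np.argsort(r)[-k:]

def cv_mse(X, y, folds=10):
    """10-fold CV of OLS with per-fold feature selection; returns mean MSE."""
    n = len(y)
    perm = rng.permutation(n)
    errs = []
    for f in range(folds):
        test = perm[f::folds]
        train = np.setdiff1d(perm, test)
        sel = top_k_by_corr(X[train], y[train], k)   # selection inside fold
        Xd = np.column_stack([np.ones(len(train)), X[np.ix_(train, sel)]])
        coef, *_ = np.linalg.lstsq(Xd, y[train], rcond=None)
        Xt = np.column_stack([np.ones(len(test)), X[np.ix_(test, sel)]])
        errs.append(np.mean((y[test] - Xt @ coef) ** 2))
    return np.mean(errs)

honest = cv_mse(X, y)

# Leaky (wrong) version: select features using ALL of y before running CV,
# so every "held-out" fold has already influenced the selection.
sel_all = top_k_by_corr(X, y, k)
leaky = cv_mse(X[:, sel_all], y)

print(round(honest, 3), round(leaky, 3))
```

Because the outcome here is pure noise, the honest estimate hovers around Var(y), while the leaky estimate is substantially smaller, an optimistic illusion created by letting the full Y vector inform the selection step.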