The Bayes-frequentist debate enters the terminology debate especially when people engage in uncritical promotion of one side or the other, because, as I wrote before, the dichotomy is disconnected from the details. The claims you put forth not only ignore abundant frequentist criticisms, but also disregard the vast diversity and divergence of methods and “philosophies” (or narrow dogmatisms) falling under the headings of “frequentist” and “Bayesian”. Fisherian frequentism in its most likelihoodist form is for all practical purposes hardly distinguishable from reference Bayes, and both hardly resemble the extremes of Neymanian hypothesis testing or operational subjective Bayesianism. Even those extremes, however, are united by many connections (e.g., the admissibility of Bayes estimators).
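To make that proximity concrete: for something as simple as a binomial count, the reference-Bayes (Jeffreys-prior) posterior interval and the mid-P compatibility interval nearly coincide. Here is a minimal sketch in Python with invented data (7 events in 20 trials); the Jeffreys prior and mid-P conventions are standard, but the numbers are purely illustrative:

```python
# Minimal sketch: Jeffreys posterior interval vs. mid-P compatibility interval
# for a binomial count. Data (x = 7 events in n = 20 trials) are invented.
from scipy import stats
from scipy.optimize import brentq

x, n = 7, 20  # hypothetical data

# Reference Bayes: 95% equal-tailed posterior interval under Jeffreys Beta(1/2, 1/2)
post = stats.beta(x + 0.5, n - x + 0.5)
bayes_lo, bayes_hi = post.ppf(0.025), post.ppf(0.975)

# Frequentist: 95% mid-P interval, whose endpoints are where the one-sided
# mid-P value equals 0.025
def mid_p_lower(p):  # upper-tail mid-P at candidate lower limit p
    return stats.binom.sf(x, n, p) + 0.5 * stats.binom.pmf(x, n, p)

def mid_p_upper(p):  # lower-tail mid-P at candidate upper limit p
    return stats.binom.cdf(x - 1, n, p) + 0.5 * stats.binom.pmf(x, n, p)

freq_lo = brentq(lambda p: mid_p_lower(p) - 0.025, 1e-9, x / n)
freq_hi = brentq(lambda p: mid_p_upper(p) - 0.025, x / n, 1 - 1e-9)

print(f"Jeffreys posterior interval:  ({bayes_lo:.3f}, {bayes_hi:.3f})")
print(f"Mid-P compatibility interval: ({freq_lo:.3f}, {freq_hi:.3f})")
```

The two intervals agree to a degree that makes the philosophical gulf between their advertised interpretations look rather theatrical.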
I’ll continue to state that claims of this or that approach being superior are misleading unless they are circumscribed by the purpose and context of the analysis. Claims that Bayesianisms (whatever their form) are uniformly desirable are like claims that sports cars are simply the best cars: fine on the autobahn, but see how far that gets you on a rutted dirt road. Likewise, where a Bayesian method and a frequentist method diverge, it can be the Bayesian method that leads us into disastrous error - a point Good noted long ago, even though he was philosophically Bayesian.
What some of us know now is that Bayes can break down without the prior being wrong in any obvious public sense - see the literature on sparse-data inference; as I’ve cited before, there is a lovely summary on pages 635-6 of Ritov, Bickel et al., Statistical Science 2014. It’s all just a manifestation of the NFLP (No-Free-Lunch Principle): every advantage you list (when real) is bought by some disadvantage elsewhere.
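To illustrate what I mean by a breakdown without an obviously wrong prior, here is a deliberately crude toy simulation in the spirit of the Robins-Ritov missing-data examples discussed in that literature; the construction and every number are my own invention, and the caricatured “Bayes” step is far simpler than anything in the paper:

```python
# Toy simulation: outcomes are observed with known probabilities pi(x) that
# happen to track the outcome itself. An innocuous-looking exchangeable
# Beta(1,1) prior on the mean of the *observed* outcomes quietly conditions
# on selection and is badly biased; the design-based Horvitz-Thompson
# estimate, which uses the known pi(x), is not. (My own simplification.)
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Known sampling probabilities, one per unit; think of the covariate space as
# so high-dimensional that no unit is ever seen twice, so smoothing can't help
pi = rng.uniform(0.1, 0.9, size=n)

# Worst case: the true outcome probability equals pi, so selection fully
# tracks the outcome. The target is E[Y] = mean(p).
p = pi
y = rng.binomial(1, p)    # latent outcomes
r = rng.binomial(1, pi)   # observation indicators, P(R=1 | x) = pi(x)

truth = p.mean()

# Caricatured "Bayes": Beta(1,1) prior on the mean of the observed outcomes.
# Its posterior mean targets E[Y | R=1], not E[Y], because nothing in the
# prior encodes the known design.
obs = r == 1
bayes_est = (y[obs].sum() + 1) / (obs.sum() + 2)

# Design-based Horvitz-Thompson estimator, unbiased for E[Y] by construction
ht_est = np.mean(r * y / pi)

print(f"truth             {truth:.3f}")
print(f"naive Bayes mean  {bayes_est:.3f}  (biased toward E[Y | observed])")
print(f"Horvitz-Thompson  {ht_est:.3f}")
```

Nothing about a Beta(1,1) prior looks “wrong” in any public sense; the failure comes from what the prior silently ignores. The full argument - that no feasible prior on the outcome function that ignores the known sampling probabilities can work uniformly here - is in the cited literature, not this toy.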
So again, regarding terminology, both “frequentist” and “Bayesian” need to be reoriented toward talking about methodologic tools, not philosophies or dogmas or dictates. When we do that, we find we can use P-values to check Bayesian analyses, Bayesian posteriors to check compatibility intervals, and so on. By doing both we can see when conventional P-values aren’t close to posterior probabilities, and when the corresponding compatibility intervals aren’t close to posterior intervals, because we have both before us at once to compare. And we can then see why we should reject highly subjective, posterior-evoking terms like “confidence interval” or “uncertainty interval” for the range between level points on a P-curve, and why we should reorient frequentist terms and descriptions to emphasize the complementarity of frequentist and Bayesian tools.
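For a concrete picture of that kind of side-by-side check, here is a minimal sketch using a normal approximation for a hypothetical log odds ratio (estimate 0.8, SE 0.4) against a skeptical normal prior; every number is invented for illustration:

```python
# Side-by-side check: 95% compatibility interval and P-value at the null
# vs. the conjugate-normal posterior under a skeptical N(0, 0.5^2) prior.
# All inputs are hypothetical.
from scipy import stats

b, se = 0.8, 0.4    # hypothetical log odds ratio estimate and standard error
m0, s0 = 0.0, 0.5   # skeptical prior mean and SD on the log scale

# Frequentist side: 95% compatibility interval and two-sided P-value at 0
ci = (b - 1.96 * se, b + 1.96 * se)
p_null = 2 * stats.norm.sf(abs(b / se))

# Bayesian side: conjugate normal posterior via inverse-variance weighting
w, w0 = 1 / se**2, 1 / s0**2
post_m = (w * b + w0 * m0) / (w + w0)
post_s = (w + w0) ** -0.5
pi95 = (post_m - 1.96 * post_s, post_m + 1.96 * post_s)
p_below0 = stats.norm.cdf(0, post_m, post_s)  # posterior Pr(effect <= 0)

print(f"compatibility interval ({ci[0]:.2f}, {ci[1]:.2f}),  P at null = {p_null:.3f}")
print(f"posterior interval     ({pi95[0]:.2f}, {pi95[1]:.2f}),  Pr(<=0) = {p_below0:.3f}")
```

In this invented example the compatibility interval excludes the null while the skeptical posterior interval does not; seeing both at once shows exactly how much work the prior is doing, which is the whole point of running the tools in tandem rather than crowning one of them.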