I, too, am very much not a fan of the fragility index, and have jousted with folks about it on Twitter before. I think this short piece in JAMA Surgery covers my objections fairly well:
https://jamanetwork.com/journals/jamasurgery/fullarticle/2730082
For one, the FI is just a repackaged p-value: it asks how many outcomes would have to flip before the result crosses the p = 0.05 threshold, so it carries no information that the p-value (and the event counts) didn't already contain.
For two, rejecting (or being skeptical of) results based on the FI is at odds with the balance any RCT must strike: recruiting the minimal number of patients required to answer the study question (with whatever operating characteristics the trial is designed to have). I said this once on Twitter: if people have a problem with "fragile" results based on the p<0.05 cutoff, they're basically saying that results at p<0.05 aren't good enough (which might be a fair discussion!). But in that case, instead of calculating the FI, they ought to be advocating for lower alpha levels (or Bayesian approaches with a very high certainty threshold) rather than just pointing out the "fragility" of published results.
As for Paul’s point, I’m not sure that statisticians “readily acquiesce” to things like this so much as we aren’t even at the table when they’re conceived or published (as our recent post-hoc power debacle shows). As far as I can tell from my discussions on Twitter, the FI has become popular with a certain profile of clinician: the hardcore skeptic, whose default position is basically “trust nothing” and who therefore loves any excuse to reject nearly any positive trial’s results. Plus, it’s now an easy way to get papers published: do a quick “systematic review” and shove out a paper about how ‘fragile’ most published trials are. All we can do is write letters, as Acuna et al did, but there’s only so much excess energy I have for fights about these things (see: post-hoc power debate) when I also have real teaching and work to do.