In the article “Can Confidence Intervals Be Interpreted?” by Naimi and Whitcomb, they mention the confidence limit ratio (or difference) as a measure of precision.
I can’t recall coming across this exact idea before, but at first glance it appears quite appealing: a ratio closer to 1 would indicate a more precise estimate, and a ratio farther from 1 would indicate less precision. Could this be a useful quantity to report alongside effect estimates and confidence intervals, as a standardized measure of precision that is comparable across different effect sizes? I can envision it being useful as a means of ranking results in some situations, for instance in metabolomic studies with many outcome variables whose concentration levels vary widely. Could it be an alternative to standardizing effect sizes as percent change or standardized mean difference, each of which has its own limitations?
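Just to make concrete what I have in mind, here is a minimal sketch (the function name and the example numbers are my own, purely for illustration), assuming the ratio is simply upper limit divided by lower limit of a ratio-scale estimate:

```python
def confidence_limit_ratio(lower, upper):
    """Ratio of upper to lower confidence limit; values closer to 1
    suggest a more precise estimate. Only meaningful for ratio-scale
    measures with strictly positive limits."""
    if lower <= 0:
        raise ValueError("requires positive confidence limits")
    return upper / lower

# Two hypothetical estimates on very different scales end up with the
# same ratio, which is what makes this attractive for ranking:
print(confidence_limit_ratio(0.8, 1.25))  # e.g. a risk ratio CI
print(confidence_limit_ratio(80, 125))    # a much larger-scale outcome
```

If that is the right reading of the measure, the scale-invariance in the example above is exactly the property I am hoping would make results comparable across outcomes.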
I’ve seen @f2harrell describe the half-width of the confidence interval as the margin of error. Are these two concepts directly related?