Confidence limit index/ratio as an additional measure of precision?

In the article “Can Confidence Intervals Be Interpreted?” by Naimi and Whitcomb, they mention the confidence limit index (or difference) as a measure of precision.

I can’t recall coming across this exact idea before, but at first glance it appears quite appealing: a ratio closer to 1 would indicate a more precise estimate, and a ratio farther from 1 would indicate less precision. Could this be a useful quantity to report alongside effect estimates and confidence intervals, as a standardized measure of precision that is comparable across different effect sizes? I can envision it being useful for ranking results in some situations, for instance in metabolomic studies with many outcome variables whose concentration levels vary widely. Could it be an alternative to standardizing effect sizes as percent change or standardized mean difference, which have their own limitations?
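To make the idea concrete, here is a minimal Python sketch with made-up names and numbers, just to illustrate what ranking outcomes by the confidence limit ratio/difference might look like (the ratio only makes sense when both limits are positive, e.g. for ratio-type effect measures):

```python
# Hypothetical effect estimates (e.g. fold changes) with 95% CI limits
# for metabolites measured on very different concentration scales.
results = {
    "metabolite_A": (1.8, 1.2, 2.7),  # (estimate, lower, upper)
    "metabolite_B": (0.6, 0.2, 1.9),
    "metabolite_C": (1.1, 1.0, 1.3),
}

for name, (est, lo, hi) in results.items():
    clr = hi / lo   # confidence limit ratio: scale-free, closer to 1 = narrower relative interval
    cld = hi - lo   # confidence limit difference: on the original scale of the estimate
    print(f"{name}: estimate={est}, CLR={clr:.2f}, CLD={cld:.2f}")

# Rank outcomes from narrowest to widest relative interval (smallest CLR first)
ranked = sorted(results, key=lambda k: results[k][2] / results[k][1])
print(ranked)
```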

I’ve seen @f2harrell talk about the half-width of the confidence interval as a measure of the margin of error. Are these two concepts directly related?
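If I understand it correctly, for a symmetric Wald-type interval the half-width is just z × SE, so the confidence limit difference would simply be twice the margin of error, while the ratio additionally depends on where the interval sits. A toy sketch (made-up numbers) of what I mean:

```python
# Hypothetical Wald-type 95% CI: estimate ± z * SE
est, se, z = 2.0, 0.4, 1.96
lower, upper = est - z * se, est + z * se

half_width = (upper - lower) / 2  # margin of error: equals z * se here
cld = upper - lower               # confidence limit difference: 2 * half_width
clr = upper / lower               # confidence limit ratio: also depends on the estimate itself
print(half_width, cld, clr)
```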

Please take a look at “The Fallacy of Placing Confidence in Confidence Intervals” by Morey et al. They discuss this under “Fallacy 2 (The Precision fallacy)”.


I’m not sure there is much conflict between this and what is written in the paper linked at the top. They do state the problems with interpreting CIs as a measure of precision, as well as the problems with interpreting the observed CI estimate as representing the estimand, etc. The ratio/difference is specifically mentioned as a summary measure of random variability, not as a measure of precision. In hindsight, I could probably have communicated this better in the opening post.

The main message of the paper by Naimi et al. is in this sentence:
“the confidence interval width measures the degree of precision characterizing the point estimate of interest.” This is an incorrect statement.

I don’t think that is the main message at all. The context is to distinguish between what relates to the estimate and what relates to the estimand, and not to confuse the two, underlining that the CI is a measure of precision, or rather of the degree of variability, in the estimate, but not in the estimand or the true parameter.

I may be wrong, but I do think that, in this context, the statement you quote is not really incorrect, as the CI is a direct function of the standard error, which is the square root of the variance of the estimate. At least that is the definition given by @f2harrell in BBR:

“The standard deviation of a summary statistic is called its standard error, which is the √ of the variance of the estimate.”
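A small numerical sketch of that connection (simulated data, just to show that a Wald-type CI is built directly from the standard error of the estimate):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10, scale=3, size=50)  # simulated sample

est = x.mean()                            # the summary statistic (sample mean)
se = x.std(ddof=1) / np.sqrt(len(x))      # its standard error: sqrt of the variance of the estimate
lower, upper = est - 1.96 * se, est + 1.96 * se  # approximate 95% CI, a direct function of the SE
print(est, se, (lower, upper))
```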

That seems a bit harsh. What makes it so bad? Could you explain what you consider the main problems?