Seems correct. A CI [a,b] means: “IF the parameter were anywhere between a and b, THEN these data would not be significantly different from it”.
Or more simply: “if the real value were somewhere around zero, then this sample would be quite normal and to be expected. But we have no idea what the real population value is, because all we have is this sample.”
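That frequentist reading can be made concrete with a small simulation (my own illustrative sketch, not from the discussion above; the true mean, sigma, and sample size are arbitrary choices): the interval, not the parameter, is the random thing, and over many repeated samples about 95% of the intervals will contain the fixed true value.

```python
import math
import random

# Illustrative sketch: repeated sampling from a population with a KNOWN,
# fixed mean. Each sample yields a 95% z-interval; roughly 95% of those
# intervals should cover the true mean. No single interval tells us where
# the parameter is -- that is the point of the comment above.
random.seed(1)
TRUE_MEAN, SIGMA, N, REPS = 0.0, 1.0, 30, 10_000

covered = 0
for _ in range(REPS):
    sample = [random.gauss(TRUE_MEAN, SIGMA) for _ in range(N)]
    xbar = sum(sample) / N
    half_width = 1.96 * SIGMA / math.sqrt(N)  # known-sigma z-interval
    if xbar - half_width <= TRUE_MEAN <= xbar + half_width:
        covered += 1

print(f"coverage: {covered / REPS:.3f}")
```

Running it prints a coverage proportion close to 0.95, which is the only probability statement the procedure licenses.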
This is in contrast to Bayesian logic: “based on previous knowledge and logic, our best estimate of the population value was … Now we have new data, which allow us to update that estimate.” Here the sample doesn’t contain any more information either, but we combine it with other information.
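The updating step described here can be sketched with a conjugate beta-binomial model (my choice of example, with made-up numbers; the discussion itself names no model): background knowledge goes in as a prior, and the new data move it to a posterior.

```python
# Minimal sketch of Bayesian updating: prior knowledge is encoded as a
# Beta(a, b) distribution over a proportion; observing new successes and
# failures simply adds counts to the prior's parameters.
prior_a, prior_b = 2.0, 2.0        # background knowledge (hypothetical)
successes, failures = 7, 3         # the new sample (hypothetical)

post_a = prior_a + successes
post_b = prior_b + failures

prior_mean = prior_a / (prior_a + prior_b)
post_mean = post_a / (post_a + post_b)
print(f"prior mean {prior_mean:.2f} -> posterior mean {post_mean:.2f}")
```

The posterior mean sits between the prior mean and the sample proportion, which is exactly the “combine the sample with other information” step the comment describes.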
Doesn’t every doctor use background information? The same signals (a cough, a pain somewhere, some blood test results) will lead to different conclusions depending on the patient’s history.
We have no algebra that shows us how to incorporate other information into frequentist results.
When investigators and readers see p < 0.05 (or p > 0.05), they start to think dichotomously about the evidence, and if p < 0.05 they even go so far as to believe that the point estimate of the effect is the true population value. They are shedding background information at this point.
Not everything is algebra. All science brings in background info and manages to do so without insisting it be in the form of a probability distribution. I think more attention to non-statistical science and life is what’s needed to use frequentist error statistical methods appropriately.
As an outsider to these domains, I do think some statistical methods compound the potential for errors. That’s why I think the Open Science Framework has been a game-changer.
I often quip that some subsets of diagnosticians will simply be more gifted at discerning the relevant errors, provided also that they have few if any conflicts of interest that bear directly on the query at hand.