Dear Luis,
thank you for your valuable contribution. I certainly need to clarify some things here. I think my point is valid and important, but I may be wrong and am willing to learn. So let me respond:
I think your main point is that Fisher worked on the “Type I error” concept, whereas I said that there is nothing like a “Type I error” in the logic of Fisherian tests of significance.
My point is to stress that the calculation of statistical significance (the p-value) is actually not related to any kind of “error”. At no point is there a question of whether H0 would be wrongly rejected: H0 is known to be wrong even before collecting the data. The p-value is calculated as a standardized measure to see whether there is already enough data to “make it obvious” that the data are incompatible with H0, allowing us to interpret the data relative to H0 (in a simple t-test this would mean that we may interpret the sign of (meanA − meanB − H0)). Without using the p-value, we could interpret any result (mean difference), and in the worst case we would get the sign correct with a probability of only 50%. Getting the wrong sign is what Andrew Gelman calls a “type-S error” (S for sign). Using some p cut-off, we won’t dare to interpret unclear data, and for the rest (the clear data) there is a lower probability of type-S errors. The only other “error” that could happen is the failure to reject H0, which means that there is not enough data even for such a minimalistic interpretation (the data are inconclusive w.r.t. H0).
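To make the type-S argument concrete, here is a minimal simulation sketch. All numbers (true difference 0.1, sd 1, n = 20 per group, p < 0.05) are my own illustrative assumptions, not taken from your text: it estimates how often the sign of the observed difference is wrong, once over all simulated studies and once only over the “clear” (significant) ones.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical setup: small true effect meanA - meanB = 0.1, sd = 1,
# n = 20 per group, 20,000 simulated studies.
true_diff, n, n_sims = 0.1, 20, 20_000

a = rng.normal(true_diff, 1.0, size=(n_sims, n))
b = rng.normal(0.0, 1.0, size=(n_sims, n))

t, p = stats.ttest_ind(a, b, axis=1)
wrong_sign = (a.mean(axis=1) - b.mean(axis=1)) < 0   # sign opposite to the truth
significant = p < 0.05                                # interpret only "clear" data

print(f"type-S rate, all results:       {wrong_sign.mean():.3f}")
print(f"type-S rate, significant only:  {wrong_sign[significant].mean():.3f}")
```

With such an underpowered design, interpreting every observed sign gets it wrong in a substantial fraction of studies, while conditioning on the p cut-off markedly lowers that fraction — which is exactly the role of the cut-off described above.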
This error is not a type-I error. By definition, a type-I error is to accept HB given that HA is actually true. In Fisher’s significance tests, the only hypothesis specified is H0 (in NHST it is unfortunately just taken to play the role of HA, and the alternative is not concrete; it’s just “not H0”, which is not a well-specified alternative that could play the role of HB!). So even if H0 is taken to be HA, there is no HB, and rejecting H0 cannot be interpreted as “accepting HB” (again: there is no HB).
This is important because deciding on an acceptable type-I error rate requires a loss function, and such a loss function can be defined only with reference to a concrete alternative (HB). So there is no type-I error without HB, and none without a type-II error.
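A tiny sketch of why a concrete HB is indispensable here. The point hypotheses, the losses, and the equal prior weights below are purely hypothetical choices of mine: the loss-minimizing cut-off can only be computed because both error probabilities, P(accept HB | HA) and P(accept HA | HB), are available — and the second one does not exist without a concrete HB.

```python
import numpy as np
from scipy import stats

# Hypothetical A/B setting: two *concrete* point hypotheses for a mean
# (sd = 1, one observation), and a loss for each kind of wrong decision.
mu_A, mu_B = 0.0, 1.0
loss_typeI  = 10.0   # accepting HB although HA is true
loss_typeII = 1.0    # accepting HA although HB is true

# Decision rule: accept HB when x > c. Expected loss as a function of c
# (assuming, for illustration only, equal prior weight on HA and HB):
def expected_loss(c):
    type_i  = 1 - stats.norm.cdf(c, loc=mu_A)  # P(x > c  | HA)
    type_ii = stats.norm.cdf(c, loc=mu_B)      # P(x <= c | HB)
    return 0.5 * (loss_typeI * type_i + loss_typeII * type_ii)

cs = np.linspace(-2, 5, 501)
best_c = cs[np.argmin([expected_loss(c) for c in cs])]
print(f"loss-minimizing cut-off: {best_c:.2f}")
```

Change either loss (or the weights) and the cut-off moves; delete HB and the `type_ii` term simply cannot be written down — which is the point made above.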
I think the confusion that “fail to reject H0” would be a type-I error is a great source of misconceptions about tests in general, like the inherent, often indirect “acceptance of H0” when H0 cannot be rejected, or the mere “acceptance of ‘not H0’” without considering the “sign” (see above).
The confusion was started by the NHST hybrid approach. If we stopped teaching this and instead taught only significance tests, there would be no place to mention type-I/II errors, only type-S and type-M errors, which are more relevant. Neyman/Pearson tests (A/B tests) can be addressed in specialized courses and for problems where one can define a sensible loss function (I don’t see this in research, however). Students would come back to the relevant questions and would not take a test for a “proof of an effect” (or, even worse, for a proof of the absence of an effect).
Please let me know where I am wrong.