Setting aside the important but nettlesome issues in the mathematical logic, the combination of the terms “type” and “error” created an essential misdirection, if not a fundamental mistake. The persistent use of the term betrays the general human impulse to coerce a spectrum of uncertainty into a categorical framework. Language matters, and the terms “type” and “error” imposed a completely different (categorical) construct that stuck. Why this is so is explained, to my satisfaction, by behavioral economics.

Humans do this sort of thing routinely. When confronted with a perplexing problem, we relieve our aversion to ambiguity, and enjoy cognitive ease, by unwittingly answering a substitute question that is simpler or more tractable. Instead of estimating the probability of a certain complex outcome, we subconsciously substitute an estimate of another, less complex or more accessible outcome. We avoid the harder question and never get around to answering it.

In designing and conducting experiments we are updating our uncertainty with data and information. But as humans we lack an innate facility for the probability calculus and are uncomfortable with it; and with our bias toward ambiguity relief, we were too easily seduced by Fisher’s finesse and by Neyman’s beguiling categorical elaboration of it.

I am not sure that type I error can be rehabilitated by reconceptualization or a refined definition. Perhaps the greater good is served by explicating, for those still unconvinced, the merits of the Bayesian perspective: the rigorous quantitative reallocation of uncertainty in light of new information.
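To make that reallocation concrete, here is a minimal sketch of a Bayesian update for an unknown proportion. The grid of candidate values, the flat prior, and the observed data (7 successes in 10 trials) are all illustrative assumptions of mine, not anything from the discussion above; the point is only that credibility is redistributed across a spectrum of possibilities, with no binary reject/accept decision anywhere in the calculation.

```python
# A minimal Bayesian updating sketch: credibility is reallocated across
# candidate values of an unknown proportion theta after observing data.
# Grid, prior, and data below are hypothetical, chosen for illustration.
from math import comb

thetas = [i / 100 for i in range(101)]      # candidate values of theta
prior = [1 / len(thetas)] * len(thetas)     # flat prior: no value favored

successes, trials = 7, 10                   # hypothetical observed data

# Likelihood of the data under each candidate theta (binomial model)
likelihood = [
    comb(trials, successes) * t**successes * (1 - t)**(trials - successes)
    for t in thetas
]

# Bayes' rule: posterior is proportional to prior times likelihood
unnorm = [p * l for p, l in zip(prior, likelihood)]
total = sum(unnorm)
posterior = [u / total for u in unnorm]

# The posterior mass is now concentrated near theta = 0.7 -- a graded
# statement of uncertainty, not a categorical "error" verdict
mode = max(range(len(thetas)), key=lambda i: posterior[i])
print(thetas[mode])  # → 0.7
```

With more data, the same update rule simply sharpens the posterior further; uncertainty is revised continuously rather than forced into a type I / type II dichotomy.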

I hope this perspective is helpful.