Without a doubt, fixed-level testing at the magical levels of 0.1, 0.05, 0.025, or 0.01 should be relegated to the historical dustbin.
That doesn’t imply any “ban” on testing, however.
It is responsible for the litany of p-value fallacies we see in the scientific journals. It is also responsible for a vast literature on the so-called “Jeffreys-Lindley” paradox. There is no paradox if \alpha declines at a suitable rate as 1-\beta increases.
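To make that concrete, here is a minimal sketch of the standard Lindley setup: a point null on a normal mean with known \sigma = 1, a N(0, \tau^2) prior on the mean under the alternative, and the test statistic held exactly at the \alpha = 0.05 boundary. The prior scale, the grid of sample sizes, and the particular decay rate chosen for \alpha are all illustrative assumptions, nothing canonical:

```python
import numpy as np
from scipy import stats

tau = 1.0            # assumed prior sd of mu under H1
z_obs = 1.96         # test statistic held exactly at the alpha = 0.05 boundary

for n in [10, 100, 10_000, 1_000_000]:
    # Bayes factor BF01 for H0: mu = 0 vs H1: mu ~ N(0, tau^2), sigma = 1
    shrink = n * tau**2 / (1 + n * tau**2)
    bf01 = np.sqrt(1 + n * tau**2) * np.exp(-0.5 * z_obs**2 * shrink)

    # Fixed-level rule: reject whenever |z| > 1.96, at every n
    reject_fixed = z_obs > stats.norm.ppf(1 - 0.05 / 2)

    # Adaptive rule: let alpha decay with n (illustrative rate only)
    alpha_n = min(0.05, 1 / np.sqrt(n * np.log(n)))
    reject_adaptive = z_obs > stats.norm.ppf(1 - alpha_n / 2)

    print(f"n={n:>9,}  BF01={bf01:9.1f}  fixed-level reject: {reject_fixed}  "
          f"adaptive (alpha={alpha_n:.4f}) reject: {reject_adaptive}")
```

With \alpha fixed, the Bayes factor in favor of the null grows without bound while the test keeps rejecting; once \alpha is allowed to decay with n, the rejections stop and the “paradox” dissolves.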
I’m exploring the relationship between this adaptive \alpha and the Bayesian rejection ratio described in a paper I’ve posted on a number of threads.
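For readers who haven’t seen that paper: assuming the rejection ratio meant here is the pre-experimental odds of rejection, (1-\beta)/\alpha — the probability of rejecting under H1 over the probability of rejecting under H0 — a quick sketch shows why \alpha has to decline. With \alpha fixed, the ratio can never exceed 1/\alpha no matter how much power accumulates, whereas a declining \alpha lets it keep growing. The effect size and the decay rate below are hypothetical:

```python
import numpy as np
from scipy import stats

d = 0.3  # assumed standardized effect size under H1

def power(alpha, n):
    """Power of a two-sided z-test of mu = 0 against mu = d."""
    zc = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.sf(zc - np.sqrt(n) * d) + stats.norm.cdf(-zc - np.sqrt(n) * d)

for n in [50, 200, 1000, 5000]:
    r_fixed = power(0.05, n) / 0.05                  # can never exceed 1/0.05 = 20
    alpha_n = min(0.05, 1 / np.sqrt(n * np.log(n)))  # illustrative decay rate
    r_adaptive = power(alpha_n, n) / alpha_n         # keeps growing with n
    print(f"n={n:>5}  fixed alpha=0.05: ratio={r_fixed:5.1f}   "
          f"adaptive alpha={alpha_n:.5f}: ratio={r_adaptive:6.1f}")
```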
Making explicit the important distinction between Fisher’s p-value and Neyman’s \alpha would go a long way toward improving both the reporting and the interpretation of results.
Related to regression: I’d hope that clarifying the distinction between \alpha and p would lead to the understanding that dropping terms from a regression model based on a decision-rule interpretation of p-values is irrational from an information-theoretic and Bayesian point of view.
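As a toy illustration of that last point (the coefficients, correlation structure, and sample size are all made up), here is a small simulation in which a predictor with a real but modest effect is frequently non-significant; applying a “drop it if p > 0.05” rule both discards predictive information and lets the retained coefficient absorb the omitted term’s effect:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, beta1, beta2 = 100, 1.0, 0.2  # assumed true coefficients

def ols(X, y):
    """OLS coefficients and two-sided p-values."""
    XtX_inv = np.linalg.inv(X.T @ X)
    coef = XtX_inv @ X.T @ y
    resid = y - X @ coef
    df = len(y) - X.shape[1]
    se = np.sqrt(resid @ resid / df * np.diag(XtX_inv))
    p = 2 * stats.t.sf(np.abs(coef / se), df)
    return coef, p

mse_full, mse_pretest, dropped = [], [], 0
for _ in range(2000):
    # Training and test draws from the same process; x1 and x2 correlated
    x1 = rng.normal(size=n)
    x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)
    y = beta1 * x1 + beta2 * x2 + rng.normal(size=n)
    x1t = rng.normal(size=n)
    x2t = 0.6 * x1t + 0.8 * rng.normal(size=n)
    yt = beta1 * x1t + beta2 * x2t + rng.normal(size=n)

    X = np.column_stack([np.ones(n), x1, x2])
    Xt = np.column_stack([np.ones(n), x1t, x2t])
    coef, p = ols(X, y)
    mse_full.append(np.mean((yt - Xt @ coef) ** 2))

    if p[2] > 0.05:                       # the decision rule: drop x2
        dropped += 1
        coef_r, _ = ols(X[:, :2], y)
        mse_pretest.append(np.mean((yt - Xt[:, :2] @ coef_r) ** 2))
    else:
        mse_pretest.append(mse_full[-1])

print(f"x2 dropped in {dropped / 2000:.0%} of replications")
print(f"mean test MSE, full model:   {np.mean(mse_full):.4f}")
print(f"mean test MSE, p-value rule: {np.mean(mse_pretest):.4f}")
```

The pre-test estimator predicts worse on average than the full model, even though the dropped term “failed” the significance test most of the time: the decision rule throws away information that the likelihood, and any sensible prior, would retain.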