Significance tests, p-values, and falsificationism

I think the questions being discussed are now more fruitfully addressed using the techniques of computational theory, as Paul Thagard argued in Computational Philosophy of Science. The approach is also discussed in this link on computational epistemology.

If we translate “falsifiable” into “computable”, the “scientific method” becomes a decision procedure (i.e., an algorithm) that can be examined rigorously with mathematical methods. I like to think of computational theory as simply applied logic.
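
To make the “decision procedure” reading concrete, here is a minimal sketch of a significance test as an algorithm mapping (data, hypothesis) to a reject / fail-to-reject decision. The function name, the one-sample t-test, and the simulated data are my own illustrative choices, not anything from the original discussion:

```python
import numpy as np
from scipy import stats

def falsification_decision(data, mu_null, alpha=0.05):
    """A significance test viewed as a decision procedure:
    map (data, point null hypothesis) to a binary decision."""
    data = np.asarray(data, dtype=float)
    n = data.size
    t_stat = (data.mean() - mu_null) / (data.std(ddof=1) / np.sqrt(n))
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)  # two-sided p-value
    return {"t": t_stat, "p": p_value, "reject_null": p_value < alpha}

# Example: data centred near 0.3, testing the null hypothesis mu = 0
rng = np.random.default_rng(0)
print(falsification_decision(rng.normal(0.3, 1.0, size=50), mu_null=0.0))
```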

This is closely related to Sander’s attempt to bring the power of modern mathematical logic to the questions of statistical methods in his posts above.

In his introductory text Mathematical Logic, Stephen Cole Kleene addresses the paradox of using logic to study logic. The technique is to separate the formal system being studied (the object language) from the logic used to study it (the observer language, or meta-language).

In statistics, there are at least 3 formal languages that can be used to discuss “scientific methods” (or “learning algorithms” if you prefer the computational philosophy):

  1. Bayesian Theory
  2. Frequentist Theory
  3. Information Theory

Some of the most important results in statistics map the formal language of Bayesian theory onto that of frequentist theory (and vice versa); the Bernstein-von Mises theorem, under which posterior distributions concentrate around the maximum likelihood estimate in large samples, is a well-known example.
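
One elementary instance of such a mapping (my own illustrative choice): with a flat prior and a normal likelihood with known variance, the Bayesian posterior for the mean is numerically the same as the frequentist sampling distribution of the sample mean, so the 95% credible interval coincides with the 95% confidence interval.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: flat (improper) prior + normal likelihood with
# known sigma gives posterior N(xbar, sigma^2 / n), so the Bayesian and
# frequentist intervals agree. Data and sigma are arbitrary toy values.
rng = np.random.default_rng(1)
sigma = 2.0
data = rng.normal(loc=5.0, scale=sigma, size=40)
xbar, n = data.mean(), data.size
se = sigma / np.sqrt(n)

z = stats.norm.ppf(0.975)
confidence_interval = (xbar - z * se, xbar + z * se)                 # frequentist
credible_interval = stats.norm.interval(0.95, loc=xbar, scale=se)    # Bayesian, flat prior

print(confidence_interval)
print(credible_interval)  # identical up to floating-point precision
```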

Awaiting exploration is the use of information theory as the meta-language for examining the mappings between Bayesian and frequentist theory.
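
As one hedged sketch of what that meta-language might look like, the Kullback-Leibler divergence from prior to posterior measures, in bits, how much a Bayesian update learned from the data. The uniform prior, Bernoulli model, and grid approximation below are my own illustrative choices:

```python
import numpy as np

# Grid approximation of the information gained by a Bayesian update,
# measured as KL(posterior || prior) in bits.
theta = np.linspace(0.001, 0.999, 999)          # grid over a Bernoulli parameter
prior = np.full_like(theta, 1.0 / theta.size)   # uniform prior on the grid

successes, trials = 7, 10
likelihood = theta**successes * (1 - theta)**(trials - successes)
posterior = prior * likelihood
posterior /= posterior.sum()

kl_bits = np.sum(posterior * np.log2(posterior / prior))
print(f"information gained from the data: {kl_bits:.3f} bits")
```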
