Frank suggested that the posts on the topic “What are credible priors and what are skeptical priors?” are getting into a more general open-ended area of philosophy and practice that deserves its own topic page.
These posts would reflect personal statistical principles and guidelines - the hope is that others will share theirs and be open to discussing them.
As a start on what constitutes applied statistics (as opposed to high theory or math stat), here’s an incredible quote Frank has posted elsewhere:
“The statistician must be instinctively and primarily a logician and a scientist in the broader sense, and only secondarily a user of the specialized statistical techniques. In considering the refinements and modifications of the scientific method which particularly apply to the work of the statistician, the first point to be emphasized is that the statistician is always dealing with probabilities and degrees of uncertainty. He is, in effect, a Sherlock Holmes of figures, who must work mainly, or wholly, from circumstantial evidence.” - Malcolm C. Rorty: “Statistics and the Scientific Method,” JASA 26, 1-10, 1931.
– Note well: That’s from 1931 and thus written around 1930, just before intense fighting broke out among traditional “objective” Bayesians (then called “inverse probability”), Fisherians, and decision-theoretic (behavioral Neyman-Pearsonian) statisticians. As I understand the history, as of 1930 the most prominent split was still between such theoreticians and the old-guard descriptive statisticians. Based on the Rorty quote, one could argue that the triumph of the theoreticians over the data describers may have set principles of applied statistics back a good 80 years (at least principles for dealing with anything other than perfect surveys and perfect experiments).
Regarding the Sherlock Holmes aspect of inference, Edward Leamer gives a wonderful account in his classic 1978 book on modeling, Specification Searches, available as a free download at
http://www.anderson.ucla.edu/faculty/edward.leamer/books/specification_searches/SpecificationSearches.pdf
or
http://www.anderson.ucla.edu/faculty/edward.leamer/books/specification_searches/specification_searches.htm
Getting to an applied statistics principle: One that has been around for several generations is that dichotomization of results into “significant” or “not significant” bins is damaging to inference and judgment.
Yet, despite decades of laments about dichotomized null-hypothesis significance testing (NHST), the practice continues to dominate many teaching materials, reviews, and research articles.
Apparently, dichotomization appeals to some innate cognitive tendency to simplify graded scales (regardless of the damage) and reach firm conclusions, a tendency called “black-and-white thinking” in psychology. Stephen Senn labeled that tendency “dichotomania” in the context of covariate modeling; the pathology is even more intense in the use of P-values. Important information is thrown away by dichotomization, and in the testing case the dichotomization further generates overconfident claims of having “shown an effect” because p<0.05 and of having “shown no effect” because p>0.05.
That brings me back to Dan Scharfstein’s query on what to do about journals and coauthors obsessed with significance testing. What I’ve been doing and teaching for 40 years now is reporting the CI and the precise P-value and never using the word “significant” in scientific reports. When I get a paper to edit, I delete all occurrences of “significant” and replace all occurrences of inequalities like “(P<0.05)” with “(P=p)”, where p is whatever the P-value is (e.g., 0.03), unless p is so small that it’s beyond the numeric precision of the approximation used to get it (which means we may end up with “P<0.0001”). And of course I include or request interval estimates for the measures under study.
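A minimal sketch of that reporting rule, in Python (the function name and the two-significant-digit formatting are my own illustrative choices; only the “report the exact value unless it is below the precision of the approximation” logic comes from the paragraph above):

```python
def format_p(p, floor=1e-4):
    """Report the exact P-value rather than an inequality,
    falling back to "P<0.0001" only when p is below the numeric
    precision assumed for the approximation that produced it."""
    if p < floor:
        return "P<0.0001"
    return f"P={p:.2g}"   # e.g., 0.03 -> "P=0.03", 0.056 -> "P=0.056"

for p in (0.03, 0.056, 0.00002):
    print(format_p(p))    # P=0.03, P=0.056, P<0.0001
```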
Only once in 40 years and about 200 reports have I had to remove my name from a paper because the authors or editors would not go along with this type of editing. And in all those battles I did not even have the 2016 ASA Statement and its Supplement 1 to back me up! I did, though, supply recalcitrant coauthors and editors with copies of articles advising display of and focus on precise P-values. One strategy I’ve since come up with to deal with those hooked on “the crack pipe of significance testing” (as Poole once put it) is to add, alongside every P-value for the null, a P-value for a relevant alternative, so that, for example, their “estimated OR=1.7 (p=0.06, indicating no significant effect)” would become “estimated OR=1.7 (p=0.06 for OR=1, p=0.20 for OR=2, indicating inconclusive results).” So far, every time, they have caved to showing just the CI in parentheses instead, with no “significance” comment.
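For anyone who wants to produce that second P-value, here is a minimal sketch assuming a two-sided Wald test on the log odds-ratio scale (the standard error below is hypothetical, and the numbers will not exactly reproduce the 0.06/0.20 pair above, which need not have come from a Wald test):

```python
import math

def wald_p(or_hat, se_log_or, or_null=1.0):
    """Two-sided Wald P-value for the hypothesis OR = or_null,
    computed from the estimate and its standard error on the log scale."""
    z = (math.log(or_hat) - math.log(or_null)) / se_log_or
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

or_hat, se = 1.7, 0.28   # hypothetical estimate and SE of log(OR)
print(f"p={wald_p(or_hat, se, 1.0):.2f} for OR=1")   # ~0.06
print(f"p={wald_p(or_hat, se, 2.0):.2f} for OR=2")   # ~0.56 with this SE
```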