Definition of statistics

I am not a statistician (I am a veterinarian), but I have an interest in improving the quality of research in my field, where statisticians are infrequently part of study design or analysis. When I try to convey to others the importance and value of statistics, I suggest that it cannot give you the answer to your question, but it can tell you how cautious or uncertain you should be. I realize this is not very well stated. On Twitter, Miguel Hernan posted this definition from an award he recently received in Belgium. It seems like a very good way to explain statistics to non-statisticians. Does anyone have a definition which they routinely employ?

I think Herman Chernoff and Lincoln Moses defined statistics perfectly on the first page of Chapter 1 of their book Elementary Decision Theory:

> …today’s statistician will say that statistics is concerned with decision making in the face of uncertainty.

Decision analysis is woefully neglected in basic stats. Why do we conduct research in medical science other than to make better decisions? It is much easier to test recall of the formula for the standard error than it is to discuss applications of stats to common problems. Even design of experiments can be expressed in decision-theoretic terms.

3 Likes

Yes, my interest in statistics stemmed from a course on decision making I took at Stanford with Professor Ron Howard. Quantitative methods are valuable in decision making, but only in the context of framing, alternatives, relevant information, values and tradeoffs, and sound reasoning.

In my opinion, “decision making” is a strawman. Real decisions are based on more than statistics: they also involve weighing the value of alternatives (I assume you don’t say that “rejecting a null hypothesis” really is a decision). Statistics provides tools to define what “information” is, and how much “relevant information” is provided by data.

> In my opinion, “decision making” is a strawman. Real decisions are based on more than statistics

I don’t understand the objection. The decision theory I was thinking of is Bayesian, not the caricature of decision theory presented by the behavioristic Neyman-Pearson school.

“Bayesian decision making” is not very common in med research, as far as I can see. And it is also not very commonly mentioned in intro statistics books. But the objection remains, I’d say, also in this context. Instead of the likelihood you use a posterior distribution. This may incorporate some of the weights that decisions will eventually be based upon, but getting these weights and priors is again outside the realm of statistics. A Bayesian analysis certainly allows us to make probability statements about parameter values (the model and the priors must be given), or about how we ought to change priors upon seeing data. This is a form of “learning from data”, refining our beliefs about parameter values in given models. That’s valuable, but it is not “decision making”. It may, under very special conditions, contribute to the process of making a decision. Decisions are eventually made outside of statistics, and even if you pack everything into the Bayesian model, the decision is just pushed onto the selection of the model and priors.
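For readers less used to thinking about what that “learning from data” looks like in practice, here is a minimal sketch of prior-to-posterior updating for a single proportion (a conjugate Beta–Binomial model; the prior and data below are invented purely for illustration):

```python
# Prior-to-posterior updating for a proportion: a concrete instance of
# "refining our beliefs about parameter values in a given model".
prior_a, prior_b = 2, 8          # Beta(2, 8) prior: prior mean 0.20
successes, failures = 12, 18     # observed data: 12/30 = 0.40

post_a, post_b = prior_a + successes, prior_b + failures
posterior_mean = post_a / (post_a + post_b)
print(posterior_mean)            # 0.35 -- beliefs pulled toward the data
```

Note that nothing in this calculation says what to do with the updated belief; that is the point being made above.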

All the elements listed above are important parts of statistics but not the whole story. My attempt at a definition from fharrell.com/post/introduction:

Statistics is a field that is a science unto itself and that benefits all other fields and everyday life. What is unique about statistics is its proven tools for decision making in the face of uncertainty, understanding sources of variation and bias, and most importantly, statistical thinking. Statistical thinking is a different way of thinking that is part detective, skeptical, and involves alternate takes on a problem.

Experimental design and understanding properties of measures are key elements of statistics.

1 Like

> Experimental design and understanding properties of measures are key elements of statistics.

I would agree, but I hasten to point out that design of experiments is fundamentally an application of the Bayesian decision theoretic machinery (in principle at least). I don’t comprehend this resistance to a broad conception of decision theory. Most objections to it hold up the simplistic Neyman-Pearson behavioristic approach as if that is the only conception around.

In economics, game-theoretic models start from the principle that “rational behavior” maximizes expected utility. Statistical decision theory borrows the formal apparatus of game theory and applies it to the problem Chernoff describes as “estimating the state of nature.” Bayesian decision theory also accepts maximization of expected utility as a valid objective. When applied to scientific projects, it amounts to maximizing the information obtained from experiments. The decision-theoretic approach is closely related to the information-theoretic one.

Of course “the real world” is more complex, but these theories are a good first-order approximation for identifying actions worth considering.
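As a rough illustration of “maximizing the information from experiments”, here is a minimal sketch (all numbers invented) that scores two candidate binary-outcome designs by the expected reduction in Shannon entropy about a single hypothesis:

```python
import math

def binary_entropy(p):
    """Shannon entropy (in bits) of a Bernoulli(p) belief."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_gain(prior, p_pos_if_true, p_pos_if_false):
    """Expected drop in entropy about a binary hypothesis H after
    observing one binary outcome from the experiment."""
    p_pos = prior * p_pos_if_true + (1 - prior) * p_pos_if_false
    post_pos = prior * p_pos_if_true / p_pos              # P(H | positive)
    post_neg = prior * (1 - p_pos_if_true) / (1 - p_pos)  # P(H | negative)
    expected_posterior_entropy = (p_pos * binary_entropy(post_pos)
                                  + (1 - p_pos) * binary_entropy(post_neg))
    return binary_entropy(prior) - expected_posterior_entropy

# Two candidate designs; the sharper one is worth more bits per observation.
print(expected_info_gain(0.5, 0.80, 0.20))  # ~0.28 bits
print(expected_info_gain(0.5, 0.95, 0.05))  # ~0.71 bits
```

Choosing the design with the larger expected gain is the information-theoretic counterpart of choosing the act with the highest expected utility.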

Design of experiments has many components that are not purely statistical but that are still in the domain of statistics.

> “Bayesian decision making” is not very common in med research, as far as I can see. And it is also not very commonly mentioned in intro statistics books

I see the fact that a decision-theoretic perspective is mostly absent from medical research and intro stat books as an important cause of poor research conduct and misinterpretation of statistical methods.

I have not seen any rebuttal to the claim that the maximization of utility, when applied to the scientific research domain, is simply the maximization of information needed to reliably estimate or predict the state of nature. That is how IJ Good conceptualized it, and I find much value in that perspective.

The maximization of information is implied by Harry Crane’s premise in The Fundamental Principle of Probability:

> If you assign a probability to an outcome happening, then you must be willing to accept a bet offered on the other side (that the outcome will not happen) at the correct implied odds.

Another objection to the testing-by-betting philosophy is its use of gambling to model statistical concepts. If recreational gambling is objectionable, consider the important issue of risk management in a modern economy – life insurance, health, pensions, etc. That involves calculating the probability of a proposition occurring or not, the cost when it does occur, and the fair price an insured party needs to pay to transfer that risk.
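A minimal sketch of the arithmetic behind both points, Crane’s implied odds and the pure premium for transferring a risk (the probability, loss, and loading below are invented for illustration):

```python
# Implied odds from a probability assignment, and the fair (pure) premium
# for transferring a risk, plus a loading for expenses and profit.
p_claim = 0.02            # probability the insured event occurs this year
loss_if_claim = 250_000   # cost to the insured when it does occur

odds_against = (1 - p_claim) / p_claim    # 49-to-1 against the event
fair_premium = p_claim * loss_if_claim    # expected loss: 5,000
premium_charged = fair_premium * 1.25     # 6,250 with a 25% loading

print(odds_against, fair_premium, premium_charged)
```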

In the U.S., “value-based healthcare” really means shifting economic risk onto providers. Medical statistics is not prepared for this.

The notion of information as the utility to be maximized in principled scientific research can be extended to the idea of markets for information explored by Robin Hanson, a physicist who became an economist.

> Information markets are institutions for accomplishing this task [of information aggregation] via trading in speculative markets, at least on topics where truth can be determined later. Relative to simple conversation, it is harder to say any one thing by market trading, and there are fewer things one can say. When people do not know whom to believe, however, the fact that a trader must put his money where his mouth is means that the things people do say in information markets can be more easily believed. Those who know little are encouraged to be quiet, and those who think they know more than they do learn this fact by losing.

Compare the notion of information markets to this description of scientific activity by statistician Paul Rosenbaum in his book Observational Studies.

> Scientific evidence is commonly and properly greeted with objections, skepticism, and doubt. Some objections come from those who simply do not like the conclusions, but setting aside such unscientific reactions, responsible scientists are responsibly skeptical…This skepticism is itself scrutinized. Skepticism must be justified and defended. One needs “grounds for doubt” in Wittgenstein’s phrase. Grounds for doubt are themselves challenged. Objections bring forth counter-objections and more evidence. As time passes arguments on one side or the other become strained, fewer scientists are willing to offer them … In this way, questions are settled.

I find them to have much in common.

1 Like

Speaking of physicists, Jerry Friedman (PhD in physics, later chair of Stanford’s Statistics Department) gave (as we physicists are prone to doing) an operational definition of statistics:

…Statistics is being defined in terms of a set of tools, namely those currently being taught in our graduate programs. A few examples are: probability theory, real analysis, measure theory, asymptotics, decision theory, Markov chains, martingales, ergodic theory, etc. The field of statistics seems to be defined as the set of problems that can be successfully addressed with these and related tools.

I’ve found other definitions of statistics to be more aspirational, with the actual profession of statistics falling short in practice. For instance, in my 2021 paper I challenged definitions of statistics that involve “learning from data” and “informed decisions using data”, because statistics is at best “a”, not “the”, discipline for doing either of these things. My 2021 paper on Andrew Carnegie showed that there is a class of quantitative methods for using data to make decisions that is not taught in modern Statistics Departments, but rather in courses in managerial accounting and production/operations management (Carnegie was a pioneer of such methods in the 19th century).

In the 19th century Carnegie was at home as a member of the American Statistical Association, but he would not be today. It was about 100 years ago this year, when Fisher published “On the Mathematical Foundations of Theoretical Statistics” (1922), that the discipline began deepening and narrowing around probability as the lens through which data should be modeled. As a result, as Brown and Kass (2009) wrote,

“[t]he essential component that characterizes the discipline is the introduction of probability to describe variation in order to provide a good solution to a problem involving the reduction of data for a specified purpose. This is not the only thing that statisticians do or teach, but it is the part that identifies the way they think.”

4 Likes

A simplified version of von Neumann-Morgenstern decision theory (i.e. maximization of a utility function, given subjective/Bayesian beliefs about the probability of the outcome) plays a significant role in medicine. See, for example, the Society for Medical Decision Making and their conferences and journals, or the Hunink et al. textbook (Amazon.com).

QALYs are an attempt to aggregate utility functions over different individuals; this requires additional assumptions relative to VNM (because the VNM utility function is only unique up to positive affine transformations), but it is still basically the same Bayesian decision-theoretic framework, just with some added strong assumptions to make interpersonal comparisons possible. QALYs obviously play a significant role in how doctors think about decisions, and there is significant research on this. Of course, in practice the information comes to the doctors in a pre-processed form; they rarely have to think carefully about their priors.
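For concreteness, here is a minimal sketch of the VNM-style calculation described above: pick the option with the highest expected QALYs. The options, probabilities, and QALY values are invented purely for illustration:

```python
# Expected-utility (here, expected-QALY) maximization over two options,
# given subjective probabilities over a small set of outcomes.
treatments = {
    "surgery":    [(0.70, 9.0), (0.25, 4.0), (0.05, 0.0)],  # (probability, QALYs)
    "medication": [(0.90, 6.0), (0.10, 3.0)],
}

def expected_qalys(lottery):
    return sum(p * q for p, q in lottery)

for name, lottery in treatments.items():
    print(name, round(expected_qalys(lottery), 2))    # surgery 7.3, medication 5.7

best = max(treatments, key=lambda name: expected_qalys(treatments[name]))
print("choose:", best)                                # surgery
```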

You are right that Bayesian decision theory is not well integrated with clinical research, i.e. with the data analysis. But can you really blame them for that? Clinical research is frequentist, and it would be very challenging to integrate Bayesian decision theory in any kind of principled way. Of course, you can argue that they should use Bayesian statistics, but this would be a major change, and I don’t think people would be willing to do it simply in order to make the connections with decision theory more explicit. The flamewar on Bayesian vs Frequentist inference is not at all as one-sided as some Bayesians like to claim, and there are valid reasons for preferring a frequentist framework.

I don’t think I’d call current medical stat practices “frequentist”; they are an incoherent mix of Neyman-Pearson behavioral decision theory and Fisher’s evidential theory.

As has been pointed out by @Sander_Greenland in a number of threads and other venues, admissible frequentist techniques (in the sense of not being dominated by another method) are in the complete class of Bayes rules. There is nothing wrong (and much to be gained) with taking a frequentist estimate, asserting a posterior, and computing the range of priors compatible with those results. @Robert_Matthews and Leonhard Held have adapted this technique from IJ Good and given it a better name: Bayesian Analysis of Credibility.

Links to papers are in this thread:
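As a rough sketch of that reverse-Bayes idea, here is the simplest normal–normal case (not the exact formulation in the Matthews and Held papers): given an estimate and its standard error, find how concentrated a mean-zero sceptical prior can be before the posterior no longer excludes zero.

```python
import math

def critical_prior_sd(estimate, se, z=1.96):
    """SD of the most concentrated mean-zero normal prior under which the
    normal-normal posterior for the effect still just excludes zero at the
    given z level. Requires a result that is 'significant' on its own."""
    if abs(estimate) <= z * se:
        raise ValueError("result is not significant on its own")
    return z * se**2 / math.sqrt(estimate**2 - (z * se)**2)

# e.g. a log hazard ratio of -0.40 with SE 0.15:
tau = critical_prior_sd(-0.40, 0.15)
print(tau)  # ~0.16; sceptical prior 95% interval exp(+/- 1.96*tau) ~ (0.73, 1.38) on the HR scale
```

Any sceptic whose prior for the hazard ratio is tighter around 1 than roughly (0.73, 1.38) would not find such a result convincing; anyone with a vaguer prior would.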

My only adaptation is to extend this model with concepts from game-theoretic probability that Glenn Shafer and Vladimir Vovk have explored. I would compute Skeptic and Advocate priors that give a range of plausible prior probability distributions, leading us into the realm of imprecise probability and robust Bayesian analysis.

This has direct relevance to the design of experiments, in that we derive the experiment that narrows this region of probabilities, giving an explicit information-theoretic interpretation to the result as well.

David R. Bickel has done some similar work along these lines. In his model, a parameter K on the scale 0–1 controls how much caution to place on the model, with values closer to 1 being more conservative (and closer to the frequentist procedure). I’d reverse the scale and re-interpret this as the Kelly fraction someone would be willing to bet at.
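For readers unfamiliar with the betting analogy, a minimal sketch of the standard Kelly fraction follows; the mapping from Bickel’s K to a Kelly fraction is the reinterpretation proposed just above, not part of Bickel’s model:

```python
def kelly_fraction(p, net_odds):
    """Fraction of bankroll to stake on an event believed to occur with
    probability p, when the bet pays net_odds-to-1 on a win."""
    return p - (1 - p) / net_odds

# A proposition given a 60% probability, offered at even money (1-to-1):
print(kelly_fraction(0.60, 1.0))   # 0.20 -> stake 20% of bankroll
# Betting only part of this amount ("fractional Kelly") is the cautious
# analogue of moving a conservatism parameter toward its careful end.
```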

For the reasons stated in my last post, on my profiles I prefer to identify myself as a data professional, not a statistician. I am interested in the perspectives offered by all those who sincerely and successfully make sense of data, not just those for whom probability is the “central dogma” (to borrow a phrase from Kristin Lennox).

(relevant slide is at 3:35)

But I’d be curious about which of the tasks you engage in do not fall under the field of statistics.

My goal is to be able to answer “none of my tasks fall under the field of statistics”. It may take a few years to achieve that (i.e., a change of careers). I have a serious problem with a profession that defines itself so broadly and ambitiously (see original post) yet actually functions in a narrow and conceptually impoverished manner (see quotes by Friedman, Brown & Kass, and Lennox in my earlier posts).

1 Like

I prefer to think of statistics as what it can be. This is what motivates the way I practice statistics. And that’s why my primary role model for statistics is non-statistician Sherlock Holmes.

3 Likes

Well put, Frank! Sherlock Holmes reminds me of this quote.

“Data analysis has its major uses. They are detective work and guidance counseling. We should all act accordingly.” - John W. Tukey

Tukey JW (1969). Analyzing data: Sanctification or detective work? American Psychologist, 24(2): 83–91.

3 Likes