Evidence-Based Decision Making and Meta-Analysis: An Information-Theoretic Critique

Andrew Gelman posted an interesting review of a paper that documents the failure of international public health authorities throughout the pandemic.

> It might sound silly to say that people are making major decisions based on binary summaries of statistical significance from seriously flawed randomized studies, but that seems to be what’s happening.

After much reading and lurking here, I’ve come to believe that @Sander is right, and statistics needs to be taught in the context of information theory. An information-theoretic POV leads to the following observations on what Gelman refers to as “the standard norms of evidence”:

  • Modern applied sciences face new challenges that demand greater attention to rigorous study design, data collection, processing, and integration than the guidelines of “Evidence Based Practice” currently provide.

  • Evidence-based practice instruction provides misleading heuristics for evaluating information from primary studies, and their errors are magnified at the level of information synthesis (meta-analysis). It can be shown that these single-study heuristics contradict the placement of information fusion (meta-analysis) at the top of the (categorical) evidence hierarchy; they are also rejected by every philosophy of statistics.

  • If its categorical interpretation is rejected, the evidence hierarchy admits a charitable reading as a ranking of the expected discriminant information obtained from different research designs, but even then it is not useful for evaluating the information/evidence in specific cases.

  • An information-theoretic approach to evidence leads directly to formal decision-theoretic considerations. In a scientific context, maximizing utility is equivalent to maximizing the information received via experiment. Meta-analysis is not a separate research design but a computational procedure for combining the available information in order to decide upon the conduct of future studies (a minimal numerical sketch follows this list).

  • A decision-theoretic approach to evidence leads toward a broadly Bayesian perspective on scientific questions and their applications.

  • Formal cost/benefit considerations that include the value of information (i.e., the value of conducting new studies) will improve the allocation of resources toward the questions of true importance to stakeholders.

  • The frequentist perspective remains valuable; a frequentist can also be modeled as a Bayesian agent with skeptical or vague prior beliefs, or with a particular cost function (in the sketch below, the skeptical prior enters as one extra precision term).

  • This dialectic among Bayesian agents with different priors (or utility functions) leads to a spectrum of covering priors that marks out the contextual boundaries of principled, scholarly debate.

  • Constructive debate within this framework consists of the assertion and critique of regression models that explain and predict future observations.

  • Information-theoretic considerations aid in the application and assessment of recent developments in machine learning and theoretical computer science.

  • It can also be shown that the (categorical) hierarchy of evidence undervalues observational evidence.

  • Observational studies can be vastly improved by attention to information-theoretic considerations, up to and including the use of directed acyclic graphs (DAGs) to model causal and confounding relationships.
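
To make the information-fusion and skeptical-prior bullets concrete, here is a minimal sketch in Python. Every number in it is hypothetical: the effect estimates, standard errors, and prior scale are invented for illustration. Under a normal approximation, fixed-effect meta-analysis pools study estimates by precision (inverse variance), and precisions add, which is the Fisher-information sense in which a meta-analysis “combines information”; a skeptical normal prior then enters the same formula as one extra precision term.

```python
import numpy as np

# Hypothetical effect estimates (log odds ratios) and standard errors
# from three primary studies -- illustrative numbers only.
estimates = np.array([-0.40, -0.15, -0.55])
std_errors = np.array([0.20, 0.30, 0.25])

precisions = 1.0 / std_errors**2   # information contributed by each study

# Fixed-effect (inverse-variance) pooling: precisions add, so the pooled
# estimate is the precision-weighted mean of the study estimates.
pooled_precision = precisions.sum()
pooled_estimate = (precisions * estimates).sum() / pooled_precision
pooled_se = (1.0 / pooled_precision) ** 0.5

# A skeptical Bayesian agent runs the same computation with one extra term:
# a normal prior centred at 0 acts like a pseudo-study with its own precision.
prior_mean, prior_sd = 0.0, 0.35   # assumed skeptical prior, invented scale
prior_precision = 1.0 / prior_sd**2
post_precision = pooled_precision + prior_precision
post_mean = (pooled_precision * pooled_estimate
             + prior_precision * prior_mean) / post_precision

print(f"pooled estimate: {pooled_estimate:.3f} (SE {pooled_se:.3f})")
print(f"posterior mean:  {post_mean:.3f} (SD {(1 / post_precision) ** 0.5:.3f})")
```

The same arithmetic shows why a vague prior recovers the frequentist answer: as `prior_sd` grows, its precision term vanishes and the posterior mean collapses to the pooled estimate.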

Related papers:

  1. Trisha Greenhalgh on mental models and public health
  2. Harry Crane on naive probabilism
  3. Philip Stark and Andrea Saltelli on Cargo Cult Statistics
  4. Frank Harrell on Errors in the Medical Literature

Thought I’d bump this with a few references (in no particular order) devoted to statistical decision theory applied to treatment selection.

Claxton, K., Lacey, L.F. and Walker, S.G. (2000), Selecting treatments: a decision theoretic approach. Journal of the Royal Statistical Society: Series A (Statistics in Society), 163: 211-225. https://doi.org/10.1111/1467-985X.00166

> We adopt a Bayesian decision theoretic framework in which a utility function is introduced describing the consequences of making a particular decision when the true state of nature is expressed via an unknown parameter θ (this parameter denotes cost, effectiveness, etc.). The treatment providing the maximum posterior expected utility summarizes the decision rule, expectations taken over the posterior distribution of the parameter θ.
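
As a minimal illustration of this decision rule, the sketch below picks between two hypothetical treatments by maximum posterior expected utility, with net monetary benefit standing in for the utility function. The posterior draws, costs, and willingness-to-pay threshold are assumptions of mine, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 100_000

# Hypothetical posterior draws of (effectiveness, cost) per treatment.
# In practice these come from a fitted Bayesian model; here they are
# normal posteriors with invented means and spreads.
posteriors = {
    "A": (rng.normal(0.60, 0.10, n_draws), rng.normal(1200, 150, n_draws)),
    "B": (rng.normal(0.70, 0.15, n_draws), rng.normal(2000, 300, n_draws)),
}

WTP = 5000  # assumed willingness to pay per unit of effectiveness

def utility(effect, cost):
    """Net monetary benefit: a common utility function for treatment choice."""
    return WTP * effect - cost

# The rule in the quote: choose the treatment with maximum posterior expected
# utility, the expectation taken over the posterior distribution of theta.
expected_utility = {t: utility(e, c).mean() for t, (e, c) in posteriors.items()}
best = max(expected_utility, key=expected_utility.get)
print(expected_utility, "-> choose", best)
```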

Watson, S.R. and Brown, R.V. (1978), The Valuation of Decision Analysis. Journal of the Royal Statistical Society: Series A (General), 141: 69-78. https://doi.org/10.2307/2344777

> The most obvious motivation for using decision analysis (applied decision theory), to offset its cost, is to “improve the quality” of the subject’s decision. This paper explores approaches for quantifying this prospective value for a proposed analytic effort, bearing in mind that, like unaided decision making, it will fall short of perfect rationality. One valuation approach compares the subject’s expected utility with and without the proposed analysis. Its formal properties are examined and a graphical implementation procedure is suggested. A second approach assesses the reduction in the expected cost of irrationality, a concept analogous to opportunity loss.
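
A stylized rendering of the first valuation approach, under assumptions of my own: the unaided subject commits to the action that is best in prior expectation, while the proposed analysis is idealized as learning θ before acting (the paper also treats analyses that fall short of perfect rationality, which this sketch ignores). The prior and utilities are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.normal(0.2, 1.0, 100_000)   # assumed prior over the unknown state

# Invented utilities: "treat" pays 2*theta (good only if theta > 0), "wait" pays 0.
u_treat = 2.0 * theta
u_wait = np.zeros_like(theta)

# Unaided decision: commit once to the action that is best in prior expectation.
eu_unaided = max(u_treat.mean(), u_wait.mean())

# Decision with analysis, idealized here as learning theta before acting.
eu_analyzed = np.maximum(u_treat, u_wait).mean()

# The difference is the prospective value of the proposed analysis.
print(f"value of the analysis ~ {eu_analyzed - eu_unaided:.3f} utility units")
```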

Sculpher, M., Claxton, K., & Akehurst, R. (2005). It’s just evaluation for decision making: recent developments in, and challenges for, cost-effectiveness research. Health policy and economics. Opportunities and challenges, 8-41. link

> The paper argues that although the methods of cost-effectiveness analysis have progressed markedly over the last decade, these developments also emphasise how far the field still has to go. Two particular methodological challenges are discussed, relating to constrained maximisation and to value-of-information methods.
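
The value-of-information methods mentioned here can be illustrated with the per-patient expected value of perfect information (EVPI), computed from the same kind of invented net-benefit posteriors as in the treatment-selection sketch above; all numbers are again assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
WTP = 5000  # assumed willingness to pay per unit of effectiveness

# Net monetary benefit draws per treatment, from the same kind of invented
# posteriors as in the treatment-selection sketch above.
net_benefit = np.column_stack([
    WTP * rng.normal(0.60, 0.10, n) - rng.normal(1200, 150, n),  # treatment A
    WTP * rng.normal(0.70, 0.15, n) - rng.normal(2000, 300, n),  # treatment B
])

# Per-patient EVPI: expected net benefit if theta were known before choosing,
# minus the expected net benefit of the best choice under current information.
evpi = net_benefit.max(axis=1).mean() - net_benefit.mean(axis=0).max()
print(f"per-patient EVPI ~ {evpi:.0f} monetary units")
```

A value-of-information analysis then scales this per-patient figure by the affected population and compares it with the cost of a proposed study, which is the formal version of the cost/benefit bullet in the original post.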