The 2019 Nobel Prize in Economic Sciences and the limitations of RCTs

This article summarizes the 2019 Nobel Prize in Economics: "What randomisation can and cannot do: The 2019 Nobel Prize"

Their critique of randomized controlled trials (RCTs) is three-fold:

  1. RCTs make no specific claim about external validity.
  2. An RCT in economics requires a pre-existing system within which the study is embedded. In the plumbing metaphor, the studies focus on fixing problems in existing plumbing, not on how to engineer and install a system.
  3. Optimal experimental design is generally not an RCT.

From a statistical point of view, #1 is drilled into practitioners. As for #2, in biostatistics we're not in the business of architecting a new human, so it's not that relevant. But claim #3 is unfamiliar to me.

Poking at it further, the article says, “When sample size is low, the optimal study is deterministic, not randomised”. Part of the justification is that what they study is a network of interactions, and it is difficult to control for “spillover” between subjects: A Theory of Experimenters

I feel like a simulation study is in order. Thoughts? Discussion?
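As a rough cut at such a simulation, here is a toy sketch (my own setup, not anything from the paper): it compares simple randomization against a deterministic allocation that balances a single prognostic covariate, at a small N.

```python
import numpy as np

rng = np.random.default_rng(2019)

def simulate(n=20, n_sims=2000, effect=1.0):
    """Compare simple randomization against a deterministic balanced
    allocation (sort on the covariate, alternate arms) on the mean
    squared error of the difference-in-means effect estimate."""
    errs_rand, errs_det = [], []
    for _ in range(n_sims):
        x = rng.normal(size=n)                # prognostic covariate
        t_rand = rng.integers(0, 2, size=n)   # coin flip per subject
        order = np.argsort(x)                 # deterministic: sort on x,
        t_det = np.zeros(n, dtype=int)        # then alternate arms
        t_det[order[::2]] = 1
        for t, errs in ((t_rand, errs_rand), (t_det, errs_det)):
            if t.sum() in (0, n):             # degenerate arm, skip draw
                continue
            y = effect * t + x + rng.normal(scale=0.5, size=n)
            est = y[t == 1].mean() - y[t == 0].mean()
            errs.append((est - effect) ** 2)
    return np.mean(errs_rand), np.mean(errs_det)

mse_rand, mse_det = simulate()
print(f"MSE, simple randomization:    {mse_rand:.3f}")
print(f"MSE, deterministic balancing: {mse_det:.3f}")
```

In this toy setup the deterministic design wins on MSE because it balances x almost exactly, which is consistent with the small-sample claim; it says nothing about the audience-persuasion side of the argument.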


I posted a host of references on this very topic here.

I got interested in this issue because, in my field of rehabilitation, we are almost certainly never going to be in a position to do a reasonably credible RCT. You can find more information about this by searching for “minimized clinical trial.”

This is a challenging philosophical issue, but it may be more accurate to say RCTs are “overvalued” in certain contexts, relative to the information they provide.

For drugs and devices, they have the best justification. For certain “softer” interventions, I think other methods should be explored.

Randomized, Controlled Trials, Observational Studies, and the Hierarchy of Research Designs (link)

I’m looking forward to input from people who have actual experience in this area. AFAICT, minimized designs account for a tiny fraction (less than 3%) of all controlled trials. In areas where sample sizes are condemned to be “small” (i.e., fewer than 200 per arm), I think minimization is worth considering.


That thread is great. The main paper that led to the award is really a bundle of the issues discussed there. He models decision making as part of the process. I just fail to grasp how this can lead to a Nobel prize when there is already so much discussion on the topic. I.e., what is the big breakthrough in this work relative to the known small-sample issues with RCTs?


I haven’t read the whole paper you posted, but the Deaton and Cartwright paper from 2018 discusses the issues of interdisciplinary understanding of RCTs:

Our arguments are intended not only for those who are innocent of the technicalities of causal inference but also aim to offer something to those who are well versed with the field. Most of what is in the paper is known to someone in some subject. But what epidemiology knows is not what is known by economics, or political science, or sociology, or philosophy—and the reverse. The literature on RCTs in these areas are overlapping but often quite different; each uses its own language and different understandings and misunderstandings characterize different fields and different kinds of projects. We highlight issues arising across a range of disciplines where we have observed misunderstanding among serious researchers and research users, even if not shared by all experts in those fields. Although we aim for a broad cross-disciplinary perspective, we will, given our own disciplinary backgrounds, be most at home with how these issues arise in economics and how they have been treated by philosophers.

FWIW – Angus Deaton is an economist; Nancy Cartwright is a philosopher of science with a mathematical background.


i scanned it quickly. it’s full of non-sequiturs (“Donors love RCTs, as they help select the right projects”), and im sceptical when people who find X arduous/costly rationalise their way to the conclusion that X is superfluous, or not appropriate ‘in this circumstance’. And economists are inclined to use the “complexity of data” argument to dismiss a certain approach, but they then use the most basic stats tool available.


by minimised rct you mena minimisation? the ema guidelines were explicitly against minimisation when stephen senn was arguing for it (in SiM i think). I don’t remember where that discussion landed. We were using it for all our clinical trials in Leeds CTU in the early 2000s


I was referring to algorithms published by Donald R. Taves, Stuart Pocock, and Richard Simon, or others who have improved on them, that purposely minimize differences between treated and control groups on prognostic factors. This literature goes back to the 1970s, before we had the computing power to implement them routinely.
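For anyone unfamiliar with those algorithms, here is a minimal sketch of the Pocock–Simon idea (my own simplification: two arms, categorical factors, a biased-coin probability; published implementations differ in the details):

```python
import random

def minimize_assign(assigned, new_subject, p_best=0.8, seed=None):
    """Assign a new subject to arm 0 or 1, favoring the arm that
    minimizes total marginal imbalance across the subject's prognostic
    factor levels (a simplified Pocock-Simon rule).

    assigned:    list of (factor_dict, arm) for prior subjects
    new_subject: dict mapping factor name -> level
    """
    rng = random.Random(seed)
    imbalance = []
    for arm in (0, 1):
        total = 0
        for f, level in new_subject.items():
            counts = [0, 0]               # per-arm counts at this level
            for subj, a in assigned:
                if subj.get(f) == level:
                    counts[a] += 1
            counts[arm] += 1              # hypothetical assignment
            total += abs(counts[0] - counts[1])
        imbalance.append(total)
    if imbalance[0] == imbalance[1]:
        return rng.randint(0, 1)          # tie: randomize outright
    best = 0 if imbalance[0] < imbalance[1] else 1
    return best if rng.random() < p_best else 1 - best

# e.g. three males already enrolled, two of them in arm 0: with
# p_best=1.0 a new male is sent deterministically to arm 1
prior = [({"sex": "M"}, 0), ({"sex": "M"}, 0), ({"sex": "M"}, 1)]
print(minimize_assign(prior, {"sex": "M"}, p_best=1.0))
```

Setting p_best below 1 keeps an element of chance in every assignment, which is one of the standard responses to the predictability objection against deterministic allocation.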

A more recent paper by Taves is here:

Can you elaborate on what you mean by “mena” or “MENA”? This is likely a gap in my background knowledge. My best guess is that it is an acronym for “Middle East/North Africa.” That is what comes up when I search for it specifically in the context of medical research.

If you look at the links in my other thread, Stephen Senn has published writings that condemn minimization. I greatly respect Senn’s scholarship and expertise in this area, but in the context of my field, I think minimization deserves much more attention. I suspect he has very good practical reasons for preferring randomization in the research projects he has been involved in, but fields condemned to “small sample” research are better served by minimized designs, if we take formal decision theory seriously.

I think a number of Senn’s concerns could be addressed with sophisticated, adaptive research programs that use a diversity of designs (minimized, adaptive sequential, observational) within a formal decision theory framework that optimizes for the economic/practical value of the information.

mena was a typo :slight_smile: ie “by minimised rct you mean minimisation?”

i always used senn to support our use of minimisation, ie i didn’t think he argued against it. In this SiM paper (which i cannot access right now) i’m sure he says something like: there is no good reason not to use atkinson’s approach: SiM paper. But im trying to remember something i read over 15 yrs ago. I thought he argued sharply against the EMA, who issued a ‘points to consider’ doc saying do not use methods such as minimisation. I assumed senn’s influence had them reconsider this position. I agree that it should be used more often

edit: i have linked to the same paper you linked to. There must be a paper in the early 2000s from senn supporting atkinson’s approach, i definitely read such a paper at the time. i can’t access anything, maybe this is it https://www.jstor.org/stable/2680966?seq=1#page_scan_tab_contents


RE: mean minimization. That makes more sense! The papers I’ve read seem to assume the mean is the best measure of central tendency, but I don’t see any reason why other measures (median, HL estimate, etc.) could not be minimized now with modern software and algorithms.

Here are some slides from a presentation I found by Senn on minimization:
Why I hate minimisation (Senn 2008).

“why i hate minimisation” - i do enjoy his clarity! His book however concludes minimisation is a “genuine unresolved issue”. ill look for the reference in the office tomorrow, im sure it exists, maybe in a letter.


I’m thinking it might be this:
Senn, S. (2004) Controversies concerning randomization and additivity in clinical trials (link)


i searched, the only thing close to what i can remember is the following from his book: “given that it is trivial with modern computing power to implement this algorithm [atkinson], there seems no point in using classical minimisation”, and i guess the stats guideline i was thinking of was ICH E9: “Deterministic dynamic allocation procedures should be avoided …”

i have to move onto other things but i know the quote exists somewhere…


I have read through the paper referenced in the Nobel prize, “A Theory of Experimenters”. I am having trouble with the central premise.

It begins by assuming that experimenters are Bayesian: not in the methodological sense (though it does build on that), but in the sense that we all hold prior beliefs before an experiment, and decision makers update on the evidence. The experimenter has his own prior, and there exists an audience of skeptical Bayesians. A parameter (lambda) measures how much the experimenter cares about his own decision versus persuading the external group. The claim is that the optimal design changes depending on lambda: RCTs become optimal at larger N and with more skeptical external audiences.
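To make the lambda tradeoff concrete, here is a stylized numerical sketch (my own made-up payoffs, not the paper's actual model): the experimenter weighs a design's value for his own learning against its persuasive value to a skeptical audience, with weight lambda on the audience.

```python
def design_value(lam, own_value, audience_value):
    """Weighted value of a design: weight (1 - lam) on the experimenter's
    own learning, lam on persuading a skeptical external audience."""
    return (1 - lam) * own_value + lam * audience_value

# Stylized payoffs: a deterministic balanced design is more informative
# for the experimenter; only an RCT persuades the skeptical audience.
det = {"own": 1.0, "aud": 0.2}
rct = {"own": 0.8, "aud": 1.0}

for lam in (0.0, 0.1, 0.25, 0.5, 1.0):
    v_det = design_value(lam, det["own"], det["aud"])
    v_rct = design_value(lam, rct["own"], rct["aud"])
    print(f"lambda={lam:.2f}: {'deterministic' if v_det > v_rct else 'RCT'}")
```

With these invented payoffs the choice flips from the deterministic design to the RCT once lambda passes a threshold, which is the qualitative claim as I understand it: the more the experimenter cares about a skeptical audience, the more an RCT is optimal.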

This is where I take issue. What if I’m the skeptic? Why is external skepticism assumed to be stronger than my prior? I’m of the opinion that I want the strongest evidence I can get for my investment. Why should what I or others believe beforehand influence the design choice?


This POV is a common one in discussions of the philosophy and foundations of statistics.
The model where the external skeptic holds a more challenging prior can be motivated by comparing two scenarios: you flip a coin and verify heads, versus someone else flips a coin and reports heads. The probability you assign to heads in the latter case should be less than or equal to the former, so it seems rational to discount reports from external agents (unless you question your own perceptions!). By symmetry, it is reasonable for an external agent to discount your reports as well, and your design should take that into account.
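The coin example can be written out as a one-line Bayes computation (my framing, assuming a reporter who is correct with some fixed reliability either way):

```python
def p_heads_given_report(p_heads=0.5, reliability=0.95):
    """P(coin actually landed heads | someone reports heads), assuming
    the reporter is correct with probability `reliability` for either
    outcome of the flip."""
    num = reliability * p_heads
    den = reliability * p_heads + (1 - reliability) * (1 - p_heads)
    return num / den

print(p_heads_given_report())   # below 1.0: the report gets discounted
```

Your own verified flip corresponds to reliability 1; any reliability below that discounts the report, and the skeptic applies exactly the same discount to reports coming from you.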

When these scenarios are modeled formally, we end up with one-person games (experiments to learn) and two-player games (experiments to “prove”).

The following papers by Kadane and Seidenfeld, and then Jayanta Ghosh, go into detail from a Bayesian perspective, which might help answer your question.

Randomization in the Bayesian Perspective. (link)

The Role of Randomization in Bayesian Analysis (pdf)


Those appear very informative at first glance. I’m going to read them carefully. The first thing that struck me is the date: 1990. None of the issues the Nobel prize recognizes seem to be new.

I read your initial link; I think the issue is that their RCT work was innovative within the development subfield of economics, which had ordinarily been dominated by observational studies. So the Nobel is for the RCT work in the context of policy making, not for modeling optimal experimental design, which, as you correctly point out, goes back a very long time.

My interest in this topic is more foundational. Given my field will not be able to produce a credible RCT for various interventions and populations, what studies should a skeptic accept as evidence? When can proponents of an intervention argue that insistence on RCT is unjustified in a particular context?

I’m guided by this quote from the introduction to Paul Rosenbaum’s text Observational Studies

Scientific evidence is commonly and properly greeted with objections, skepticism, and doubt. Some objections come from those who simply do not like the conclusions, but setting aside such unscientific reactions, responsible scientists are responsibly skeptical…This skepticism is itself scrutinized. Skepticism must be justified and defended. One needs “grounds for doubt” in Wittgenstein’s phrase. Grounds for doubt are themselves challenged. Objections bring forth counter-objections and more evidence. As time passes arguments on one side or the other become strained, fewer scientists are willing to offer them … In this way, questions are settled.