Paper: Objecting to experiments that compare two unobjectionable policies or treatments

I know it’s PNAS :grinning:, but this is a very interesting paper on the phenomenon whereby people are happy to implement intervention A or B, but not to run a randomized study to see which one is better.

Randomized experiments have enormous potential to improve human welfare in many domains, including healthcare, education, finance, and public policy. However, such “A/B tests” are often criticized on ethical grounds even as similar, untested interventions are implemented without objection. We find robust evidence across 16 studies of 5,873 participants from three diverse populations spanning nine domains—from healthcare to autonomous vehicle design to poverty reduction—that people frequently rate A/B tests designed to establish the comparative effectiveness of two policies or treatments as inappropriate even when universally implementing either A or B, untested, is seen as appropriate. This “A/B effect” is as strong among those with higher educational attainment and science literacy and among relevant professionals. It persists even when there is no reason to prefer A to B and even when recipients are treated unequally and randomly in all conditions (A, B, and A/B). Several remaining explanations for the effect—a belief that consent is required to impose a policy on half of a population but not on the entire population; an aversion to controlled but not to uncontrolled experiments; and a proxy form of the illusion of knowledge (according to which randomized evaluations are unnecessary because experts already do or should know “what works”)—appear to contribute to the effect, but none dominates or fully accounts for it. We conclude that rigorously evaluating policies or treatments via pragmatic randomized trials may provoke greater objection than simply implementing those same policies or treatments untested.


There is one circumstance in which this might be logical behaviour.
Suppose you are comparing the conventional treatment (A), which everyone has been using for donkey’s years, with a new one (B). You don’t know which is better, and you suspect B may be an improvement.
However, you have a very large sample of people using A, so you are fairly confident it doesn’t have any unexpectedly horrible side-effects. For B, it’s not just that you don’t know the mean effect: you also have only a small sample to judge from, so you don’t have a good idea of the range of outcomes.
I was thinking that it’s a bit like Tripadvisor ratings. You may regularly go to a hotel where you are moderately satisfied, and which has an average rating of 4. But someone tells you to try the new one that’s just opened, which has 5 ratings averaging 4.1. Maybe you prefer to stick with the tried and tested option because you are more confident that you know the range of experiences you will have with it. (This disregards the possibility of fake reviews, etc.)
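To put rough numbers on that intuition, here is a toy sketch (my own illustration, not from the paper) comparing confidence intervals for the two hotels’ true mean ratings, assuming ratings are noisy with a standard deviation of about 1 point on the five-point scale:

```python
import math

def mean_ci(mean, n, sd=1.0, z=1.96):
    """Approximate 95% confidence interval for a true mean rating.

    The standard error shrinks with sqrt(n), so a large sample pins
    down the mean tightly while a small one leaves it wide open.
    """
    se = sd / math.sqrt(n)
    return (mean - z * se, mean + z * se)

old_hotel = mean_ci(4.0, 500)  # tried and tested: 500 ratings
new_hotel = mean_ci(4.1, 5)    # just opened: 5 ratings

print(old_hotel)  # roughly (3.91, 4.09)
print(new_hotel)  # roughly (3.22, 4.98)
```

The new hotel’s slightly higher average tells you almost nothing: its interval swallows the old hotel’s entirely, and includes much worse outcomes. A risk-averse chooser could reasonably stick with A.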
But if this is what drives people’s behaviour, it would be interesting, as it would be evidence against Tversky and Kahneman’s ‘law of small numbers’, which maintains that people place too much reliance on small samples.


This doesn’t seem like the same thing. In your proposed scenario, people would object to the universal implementation of B, wouldn’t they? Besides, in the paper both A and B are supposed to be untested, whereas your A is a conventional (and therefore effectively tested?) treatment.