The limits of behavioural science: coronavirus edition

Author

Jason Collins

Published

April 7, 2020

Most articles on how behavioural science (or “behavioural economics”) can explain “X” are rubbish. “How behavioural economics explains Donald Trump’s election” or the equivalent would have been “How behavioural economics doomed Donald Trump” if he had failed to be elected. It’s after-the-fact storytelling of no scientific substance.

Over the last six weeks I have been collecting examples in the media of behavioural science applied to the coronavirus pandemic. There’s plenty of the usual junk.

As it turns out, Stuart Ritchie has also been on the case and written an article at UnHerd, Don’t trust the psychologists on coronavirus, saving me the trouble of writing my own. I would have chosen a different title, but follow the link and read the whole article. Below are some highlights.

First, Ritchie on Cass Sunstein:

Further psychological insights were provided by Cass Sunstein, co-author of the best-selling book Nudge, which used lessons from behavioural economics (essentially psychology by another name) that could inform attempts to change people’s behaviour. In an article for Bloomberg Opinion on 28 February (by which point there were over 83,000 confirmed coronavirus cases worldwide), Sunstein wrote that anxiety regarding the coronavirus pandemic was mainly due to something called “probability neglect”.

Because the disease is both novel and potentially fatal, Sunstein reasoned, we suffer from “excessive fear” and neglect the fact that our probability of getting it is low. “Unless the disease is contained in the near future,” he continued, “it will induce much more fear, and much more in the way of economic and social dislocation, than is warranted by the actual risk”.

The opening paragraph of Sunstein’s article was somewhat bizarre:

At this stage, no one can specify the magnitude of the threat from the coronavirus. But one thing is clear: A lot of people are more scared than they have any reason to be. They have an exaggerated sense of their own personal risk.

I know this is shooting fish in a barrel, but how can you claim that people have an exaggerated sense of their own personal risk when no one can specify the magnitude of the threat?

(As an aside, it turns out I have joined the masses of people blocked by Cass Sunstein on Twitter. Given my tweets are almost solely broadcasts of my blog posts, and my criticism of Sunstein within those posts is rather mild - and unlikely to have been read by Sunstein - my working hypothesis is that he has blocked everyone Nassim Taleb follows. Recall the first sentence of his book #Republic: “In a well-functioning democracy, people do not live in echo chambers or information cocoons”. I’ll leave that for now…)

Ritchie also points out that Gerd Gigerenzer makes the same error as Sunstein:

On 12 March, the day after Italy had announced its 827th death from the virus, the eminent psychologist Gerd Gigerenzer published a piece in Project Syndicate entitled “Why What Does Not Kill Us Makes Us Panic”. It was, to say the least, confused: it opened with an acknowledgement that we don’t know how bad this epidemic could be, but immediately went on to make the case that we’d likely overreact, and failed to consider any opposing arguments.

Gigerenzer normally does an admirable job of defending human intuition against critiques from the outside, but he has always questioned our “risk literacy”.

Stories based on behavioural science aren’t just landing in the media. They are forming part of advice to government:

The Behavioural Insights Team, a consulting company nicknamed the “Nudge Unit”, has been brought in to assist the UK’s response. At first it seemed their focus was on how to encourage handwashing, but there appears to have been mission creep.

For example, the team’s head, David Halpern, was interviewed as a “government coronavirus science advisor” about broad policies of “cocooning” older people. He was also quoted in support of the idea — which might yet seem grievously misguided in hindsight — that social-distancing measures should only be brought in gradually, to avoid people becoming fatigued. After he was reportedly “bollocked” by No.10 a fortnight ago for introducing the unfortunate phrase “herd immunity” into the national conversation, Halpern hasn’t (to my knowledge) been heard from again in public.

In defence of Halpern, this story is light on details, and there is a lot of interesting research that could inform this argument - as Vaughan Bell points out. That said, backing herd immunity on an assumption of fatigue is quite a leap of faith.

Why are behavioural scientists struggling? One reason is the limited generalisability of the scientific literature:

To back up his points about “probability neglect”, Sunstein had referred to a 2001 paper in the journal Psychological Science. It reported three experiments; Sunstein focused on the third one, which included 156 participants, all of whom were undergraduate students reasoning about how much they’d pay to avoid an imaginary electric shock. It’s not a criticism of the scientists to say that this experiment is only tenuously relevant to a global pandemic.

Indeed, alongside talk of the “replication crisis” there’s been discussion of a “generalisability crisis”, with renewed realisation that results from lab experiments don’t necessarily generalise to other contexts. A global pandemic of a completely novel virus is, by its very definition, a context never encountered before. So how can we be sure that the results of behavioural science experiments — even those that are based on bigger or more representative samples than 156 undergrads — are relevant to our current situation?

The answer is that we can’t. Exploring the human capacity for bias and irrationality can make for quirky, thought-provoking articles and books that make readers feel smarter (and can build towards a tentative scientific understanding of how the mind works). But when a truly dangerous disease comes along, relying on small-scale lab experiments and behavioural-economic studies results in dreadful misfires like the articles we encountered above.

Although there are many behavioural phenomena that certainly seem relevant to today’s news — bias, sunk costs, the tragedy of the commons — it’s not at all clear how these concepts would be practically applied to do what needs to be done right now: slowing the spread of the disease.

On Ritchie’s closing, I agree:

As intriguing as many psychological studies are, the vast majority of the insights we’ve gained from our research are simply not ready for primetime — especially in the case of a worldwide emergency where millions of lives are at stake. Much of the useful advice behavioural scientists can give isn’t really based on “science” to any important degree, and is intuitive and obvious.

Where they try to be counter-intuitive — for instance, arguing that people are wrong to find a global pandemic frightening — they simply end up embarrassing themselves, or worse, endangering people by having them make fewer pandemic preparations. This isn’t to say that psychology isn’t useful when it stays in its own lane: it’ll be important to ensure that as many people as possible have access to psychotherapy for the mental-health effects of the pandemic, for instance. But that’s a secondary effect of the virus: my argument here is that psychology can give little reliable counsel about our immediate reaction to the pandemic.

Psychologists should know their limits, and avoid over-stretching results from their small-scale studies to new, dissimilar situations. Decision-makers should, before using psychology research as the basis for policy, know just how weak and contentious so much of it is. And everyone else should stay at home, wash their hands — and beware psychologists bearing advice.

So could behavioural scientists be useful in this pandemic? They could help develop and test hypotheses to increase handwashing. They could help design communications about the need to stay at home. They have insights relevant to productivity while remote working. But they should be wary of telling stories about the accuracy of risk perception or how people will behave in the long term. Most of those claims are little more than storytelling.