Me on Rationally Speaking, plus some additional thoughts
My conversation with Julia Galef on Rationally Speaking is out, exploring how behavioural economics and its applications could be better.
There are links to a lot of the academic articles we discuss on the Rationally Speaking site. We also talk about several of my own articles, including:
Please not another bias! An evolutionary take on behavioural economics (This article is my presentation script for a marketing conference. I’ve been meaning to rewrite it as an article for some time to remove the marketing flavour and to replace the evolutionary discussion with something more robust. Much of the evolutionary psychology experimental literature relies on priming, and I’m not confident the particular experiments I reference will replicate.)
Rationalizing the “Irrational”
When Everything Looks Like a Nail: Building Better “Behavioral Economics” Teams
The difference between knowing the name of something and knowing something
There were a couple of questions for which I could have given different (better) answers, so here are some additional thoughts.
An example of evolutionary biology “explaining” a bias: I gave an example of the availability heuristic, but one for which more work has been done explicitly on the evolutionary angle is loss aversion. Let me quote from an article by Owen Jones, who has worked directly on this:
To test the idea that a variety of departures from rational choice predictions might reflect evolved adaptations, I had the good fortune to team up with primatologist Sarah Brosnan.
…
The perspective from behavioral biology on the endowment effect is simple: in environments that lacked today’s novel features (such as reliable property rights, third-party enforcement mechanisms, and the like) it is inherently risky to give up what you have for something that might be slightly better. Nothing guarantees that your trading partner will perform. So in giving up one thing for another you might wind up with neither.
So the hypothesis is that natural selection would historically have favored a tendency to discount what you might acquire or trade for, compared to what you already have in hand, even if that tendency leads to irrational outcomes in the current (demonstrably different) environment. The basis of the hypothesis is a variation of the maxim that a bird in the hand has been, historically, worth two in the bush.
First, we predicted that if the endowment effect were in fact an example of Time-Shifted Rationality then the endowment effect would likely be observable in at least some other species. Here’s why, in a nutshell. … If the endowment effect tends to lead on average to behaviors that were adaptive when there are asymmetric risks of keeping versus exchanging, then this isn’t likely to be true only for humans. It should at a minimum be observable in some or all of our closest primate relatives, i.e. the other 4 of the 5 great apes: chimpanzees, orangutans, gorillas, and bonobos.
Second, we predicted that if the endowment effect were in fact an example of Time-Shifted Rationality, the prevalence of the endowment effect in other species is likely to vary across categories of items. This follows because selection pressures can, and very often do, narrowly tailor behavioral predispositions that vary as a function of the evolutionary salience (i.e., significance) of the circumstance. Put another way, evolved behavioral adaptations can be “facultative,” consisting of a hierarchical set of “if-then” predispositions, which lead to alternate behaviors in alternate circumstances. Because no animal evolved to treat all objects the same, there’s no reason to expect that they, or humans, would exhibit the endowment effect equally for all objects, or equally in all circumstances. Some classes of items are obviously more evolutionarily significant than others – simply because value is not distributed evenly across all the items a primate encounters.
Third (and as a logical consequence of prediction (2)), we predicted that the prevalence of the endowment effect will correlate – increasing or decreasing, respectively – with the increasing or decreasing evolutionary salience of the item in question. Evolutionary salience refers to the extent to which the item, under historical conditions, would contribute positively to the survival, thriving, and reproducing of the organism acquiring it.
To test these predictions, we conducted a series of experiments with chimpanzee subjects. No other extant theory generated this set of three predictions. And the results of our experiments corroborated all three.
…
Our results provided the first evidence of an endowment effect in another species. Specifically, and as predicted, chimpanzees exhibit an endowment effect consonant with many of the human studies that find the effect. As predicted, the prevalence of the effect varies considerably according to the class of item. And, as predicted most specifically, the prevalence was far greater (fourteen times greater, in fact) within a class of trading evolutionarily salient items – here, food items – for each other than it was when trading within a class of non-evolutionarily salient items – here, toys. Put another way, our subjects were fourteen times more likely to hang onto their less-preferred evolutionarily salient item, when they could trade it for their more-preferred evolutionarily salient item, than they were to hang onto their less-preferred item in the corresponding trade between non-evolutionarily salient items.
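The asymmetric-risk logic Jones describes can be made concrete with a toy expected-value calculation. Here is a minimal sketch, with all numbers hypothetical: trading offers an item slightly better than the one in hand, but without reliable enforcement the trading partner only performs some of the time, and a default leaves you with neither item.

```python
# Toy model of the "bird in the hand" logic behind the endowment effect.
# All numbers are hypothetical, chosen only to illustrate the asymmetry.

def expected_value_of_trade(v_new: float, p_partner_performs: float) -> float:
    """Expected value of giving up your item for a promised better one.

    If the partner defaults, you end up with neither item (value 0).
    """
    return p_partner_performs * v_new

v_keep = 1.0      # value of the item already in hand
v_new = 1.2       # the offered item is slightly better
p_enforce = 0.7   # without third-party enforcement, the partner
                  # only delivers 70% of the time

ev_trade = expected_value_of_trade(v_new, p_enforce)
print(f"keep: {v_keep:.2f}, trade: {ev_trade:.2f}")  # keep: 1.00, trade: 0.84

# Break-even delivery probability: trading only pays when the partner
# performs more often than v_keep / v_new of the time.
print(f"break-even p: {v_keep / v_new:.2f}")  # break-even p: 0.83
```

Under these (made-up) numbers, holding on to the inferior item is the higher-expected-value choice, even though the offered item is objectively better – the "irrational" reluctance to trade was once simply prudent.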
On the role of signalling: Costly signalling theory is the idea that for a signal of quality to be honest, it should impose a cost on the signaller that someone without that quality would not be able to bear. For instance, peacocks have large unwieldy tails that consume a lot of resources, with only the highest quality males able to incur this cost.
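The peacock logic reduces to a simple honesty condition, sketched below with hypothetical payoffs: a signal stays honest (separating) when its cost exceeds the benefit for low-quality signallers but not for high-quality ones, so only the latter find signalling worthwhile.

```python
# Toy costly-signalling check: a signal is honest when high-quality
# types can afford it and low-quality types cannot.
# All payoffs are hypothetical.

def will_signal(benefit: float, cost: float) -> bool:
    """A type signals only if the benefit of being believed high quality
    outweighs that type's cost of producing the signal."""
    return benefit > cost

benefit = 10.0    # payoff from being taken to be high quality
cost_high = 4.0   # a large tail is relatively cheap for a healthy peacock
cost_low = 15.0   # the same tail is ruinously expensive for a weak one

print(will_signal(benefit, cost_high))  # True: high-quality types signal
print(will_signal(benefit, cost_low))   # False: low-quality types opt out
```

Because only high-quality types choose to signal in this setup, observing the signal is reliable evidence of quality: the cost difference, not the signal itself, is what keeps it honest.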
To understand how signalling might affect our understanding of human behaviour, I tend to categorise the possible failures to understand people's behaviour in the following three ways.
First, we can simply fail to understand the objective. If a person’s objective is status, and we try to understand their actions as attempts to maximise income, we might mistakenly characterise their decisions as errors.
Second, we might understand the proximate objective, but fail to realise that there is an underlying ultimate objective. Someone might care about, say, income, but if it is in the context of achieving another objective, such as getting enough income to make a specific purchase, we might similarly fail to properly assess the “rationality” of their decisions. For example, there might be a minimum income threshold we need to reach, which leads us to take “risky” actions in relation to our income.
Signalling sometimes falls into this second basket, as the proximate objective is the signal for the ultimate objective. For instance, if we use education as a signal of our cognitive and social skills to get a job, viewing the objective as getting a good education misses the point.
Third, even if we understand the proximate and ultimate objectives, we might fail to understand the mechanism by which the objective is achieved. Signalling can lead to complicated mechanisms that are often overlooked.
To illustrate, even if we know that someone is seeking further education only to increase their employment prospects, we would expect different behaviour if education were a signal rather than a source of skills for use on the job. If education is purely a signal, people may only care about getting the credential, not what they learn. If education serves a more direct purpose, we would expect students to invest much more in learning.
I discuss a couple of these points in my Behavioral Scientist article Rationalizing the “Irrational”. Evolutionary biology is a great source of material on signalling, although as I have written about before, economists did have at least one crucial insight earlier.
Finally, I’ve plugged Rationally Speaking before, and here is a list of some of the episodes I enjoyed the most (there are transcripts if you prefer to read):
Tom Griffiths on how our biases might be signs of rationality
Daniel Lakens on p-hacking
Paul Bloom on empathy
Bryan Caplan on parenting
Phil Tetlock on forecasting
Tom Griffiths and Brian Christian on Algorithms to Live By
Don Moore on overconfidence
Jason Brennan on “Against Democracy”
Jessica Flanigan on self-medication
Alison Gopnik on parenting
Christopher Chabris on collective intelligence
I limited myself to eleven here - there are a lot of other great episodes worth listening to.