How happy is a paraplegic a year after losing the use of their legs?

From Dan Gilbert’s 2004 TED talk, now viewed over 16 million times:

Let’s see how your experience simulators are working. Let’s just run a quick diagnostic before I proceed with the rest of the talk. Here’s two different futures that I invite you to contemplate. You can try to simulate them and tell me which one you think you might prefer. One of them is winning the lottery. This is about 314 million dollars. And the other is becoming paraplegic.

Just give it a moment of thought. You probably don’t feel like you need a moment of thought.

Interestingly, there are data on these two groups of people, data on how happy they are. And this is exactly what you expected, isn’t it? But these aren’t the data. I made these up!

These are the data. You failed the pop quiz, and you’re hardly five minutes into the lecture. Because the fact is that a year after losing the use of their legs, and a year after winning the lotto, lottery winners and paraplegics are equally happy with their lives.

And here’s Dan Gilbert reflecting on this statement 10 years later:

The first mistake occurred when I misstated the facts about the 1978 study by Brickman, Coates and Janoff-Bulman on lottery winners and paraplegics.

At 2:54 I said, “… a year after losing the use of their legs, and a year after winning the lotto, lottery winners and paraplegics are equally happy with their lives.” In fact, the two groups were not equally happy: Although the lottery winners (M=4.00) were no happier than controls (M=3.82), both lottery winners and controls were slightly happier than paraplegics (M=2.96).

So why has this study become the poster child for the concept of hedonic adaptation? First, most of us would expect lottery winners to be much happier than controls, and they weren’t. Second, most of us would expect paraplegics to be wildly less happy than either controls or lottery winners, and in fact they were only slightly less happy (though it is admittedly difficult to interpret numerical differences on rating scales like the ones used in this study). As the authors of the paper noted, “In general, lottery winners rated winning the lottery as a highly positive event, and paraplegics rated their accident as a highly negative event, though neither outcome was rated as extremely as might have been expected.” Almost 40 years later, I suspect that most psychologists would agree that this study produced rather weak and inconclusive findings, but that the point it made about the unanticipated power of hedonic adaptation has now been confirmed by many more powerful and methodologically superior studies. You can read the original study here.

It’s great that he is able to step back and admit his mistakes. One thing that perplexes me, however, is that he purports to show the real data on a slide:

[Gilbert’s slide showing the purported data]

As you can see, the chart runs on a scale reaching up to 70, with both groups measured at 50. The actual measure was on a 5-point scale. Where did these numbers come from? Did Gilbert simply make these data up?

If this were just a case of misstating the point of the study, I would feel much sympathy. As he states:

When I gave this talk in 2004, the idea that videos might someday be “posted on the internet” seemed rather remote. There was no Netflix or YouTube, and indeed, it would be two years before the first TED Talk was put online. So I thought I was speaking to a small group of people who’d come to a relatively unknown conference in Monterey, California, and had I realized that ten years later more than 8 million people would have heard what I said that day, I would have (a) rehearsed and (b) dressed better.

That’s a lie. I never dress better. But I would have rehearsed. Back then, TED talks were considerably less important events and therefore a lot more improvisational, so I just grabbed some PowerPoint slides from previous lectures, rearranged them on the airplane to California, and then took the stage and winged it. I had no idea that on that day I was delivering the most important lecture of my life.

But if that chart was made up, my sympathy somewhat fades away.

How likely is “likely”?

From Andrew Mauboussin and Michael Mauboussin:

In a famous example (at least, it’s famous if you’re into this kind of thing), in March 1951, the CIA’s Office of National Estimates published a document suggesting that a Soviet attack on Yugoslavia within the year was a “serious possibility.” Sherman Kent, a professor of history at Yale who was called to Washington, D.C. to co-run the Office of National Estimates, was puzzled about what, exactly, “serious possibility” meant. He interpreted it as meaning that the chance of attack was around 65%. But when he asked members of the Board of National Estimates what they thought, he heard figures from 20% to 80%. Such a wide range was clearly a problem, as the policy implications of those extremes were markedly different. Kent recognized that the solution was to use numbers, noting ruefully, “We did not use numbers…and it appeared that we were misusing the words.”

Not much has changed since then. Today people in the worlds of business, investing, and politics continue to use vague words to describe possible outcomes.

To examine this problem in more depth, team Mauboussin asked 1700 people to attach probabilities to a range of words or phrases. For instance, if a future event is likely to happen, what percentage of the time would you estimate it ends up happening? Or what if the future event has a real possibility of happening?

Unsurprisingly, the answers are all over the place. The HBR article has a nice chart of the distribution of responses, and you can see more detailed results here. (You can also take the survey there.)

What is the range of answers for an event that is “likely”? The 90% probability range for “likely” – that is, the range within which 90% of the answers fell (with 5% of the answers above and 5% below) – was 55% to 90%. “Real possibility” had a probability range of between 20% and 80% – the phrase is near meaningless. Even “always” is ambiguous, with a probability range of 90% to 100%.
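
To make the computation concrete, here is a minimal sketch in Python with simulated responses (the distribution is assumed for illustration; the real data are on the Mauboussins’ survey site):

```python
import numpy as np

# Simulated survey responses: the probability (in %) each respondent
# attaches to the phrase "likely". Assumed distribution, illustrative only.
rng = np.random.default_rng(0)
responses = rng.normal(loc=70, scale=12, size=1700).clip(0, 100)

# The 90% probability range: the span within which 90% of answers fall,
# with 5% of answers below it and 5% above.
low, high = np.percentile(responses, [5, 95])
print(f"90% range for 'likely': {low:.0f}% to {high:.0f}%")
```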

An interesting finding of the survey was that men and women differ in their interpretations. Women are more likely to take a phrase as indicating a higher probability.

So what does team Mauboussin suggest we should do? Use numbers. Pin down those subjective probabilities using objective benchmarks. Practice.

And to close with another piece of Sherman Kent wisdom:

Said R. Jack Smith:  Sherm, I don’t like what I see in our recent papers. A 2-to-1 chance of this; 50-50 odds on that. You are turning us into the biggest bookie shop in town.

Replied Kent:  R.J., I’d rather be a bookie than a [blank-blank] poet.

Avoiding trite lists of biases and pictures of human brains on PowerPoint slides

From a book chapter by Greg Davies and Peter Brooks, Practical Challenges of Implementing Behavioral Finance: Reflections from the Field (quotes taken from a pre-print):

Taken in isolation, the ideas and concepts that comprise the field of behavioral finance are of very little practical use. Indeed, many of the attempts to apply these ideas amount to little more than a trite list of biases and pictures of human brains on PowerPoint slides. Talking a good game in the arena of behavioral finance is easy, which often leads to the misperception that it is superficial. Yet, making behavioral finance work in practice is much more challenging: it requires integrating these ideas with working models, information technology (IT) systems, business processes, and organizational culture.

Substitute “behavioural economics” and its kin for “behavioural finance”, and the message reads the same.

On the “bias” bias:

Today, extremely long lists of biases are available, which do little to convey the underlying sophistication, complexity, and thoroughness of more than half a century of highly robust experimental and theoretical work. These lists provide no real framework for potential practitioners to deploy when approaching a tangible problem. And many of these biases appear to overlap or conflict with each other, which can make behavioral finance appear either very superficial or highly confused.

The easily accessible examples that academics have used to illustrate these biases to wide audiences have sometimes led to the impression that behavioral economics is an easy field to master. This misrepresentation leads to inevitable disappointment when categorizing biases proves not to be an easy panacea. A perception of the field as “just anecdotes and parlor games” reduces the willingness of the commercial world to put substantial investments of time and resource into building applications grounded on the underlying ideas. Building behavioral finance ideas into commercial applications requires both depth and breadth of understanding of the theory and, in many cases, large resource commitments.

On whether there is a grand unified theory:

A commonly expressed concern, at least in the mainstream press, is that there exists no grand unified theory of behavioral economics, and that the field is thus merely a chaotic collection of unconnected and often contradictory findings. For the purpose of practical implementation, the notion that this is, or needs to be, a clearly defined field should be eliminated, reducing the desire to erode it with arbitrary labels and definitions. Human behavior operates at multiple levels from the neurological to complex social interactions. Any quest for a grand unified theory to mirror that of physical sciences may well be entirely misguided, together with the notion that such a theory is necessary for the broad field to be useful. Much more effective is an approach of treating the full range of behavioral findings as a rich toolbox that can be applied to, and tested on, a range of practical concerns.

On the superficial application:

The first major challenge is that behavioral finance is not particularly effective if applied superficially. Yet, superficial attempts are commonplace. Some seek to do little more than offer a checklist of biases, hoping that informing people of poor decision-making can solve the problem. Instead, a central theme of decision science is the consistent finding that merely informing people of their adverse behavioral proclivities is very seldom effective in combating them.

Because behavioral finance is both topical and fascinating to many people, it attracts ‘hobbyists’ who can readily recite a number of biases, but who neither have the depth of knowledge of the field overall, nor a solid grasp of the theoretical underpinnings of the more technical aspects of the field. …

This chapter is not an attempt to erect barriers to entry amongst behavioral practitioners and claim that only those with advanced degrees in the field should be taken seriously. On the contrary, the effect of greater academic training can cause its beneficiaries to hold on too closely to narrow and technical interpretations of the field to make them effective practitioners. Indeed, some of the most effective practitioners do not have an extensive academic background in the field. However, they have invested considerable time and effort getting to know and deeply understand the breadth and depth of the field.

And on naive buyers:

Limited study of behavioral finance through reading the popular books on the topic may equip one to sound knowledgeable and appear convincing. However, as a relatively new field, the purchasers of behavioral expertise are seldom equipped to know the difference and may be unable to tell a superficially convincing approach from approaches that embody true understanding. This leaves the field open to consultants peddling ‘behavioral expertise’ but having in their toolkit little more than a list of biases that they apply sequentially and with little variation to each problem encountered. Warning flags should go up whenever the proposal rests heavily on catalogues of behavioral biases or contains a preponderance of pictures of brains.

Chris Voss’s Never Split the Difference: Negotiating as if your life depended on it

Summary: Interesting ideas on how to approach negotiation, but I don’t know how much weight to give them. How much expertise could be developed in hostage negotiations? Can that expertise be distilled into principles, or is much of it tacit knowledge?

Chris Voss’s Never Split the Difference: Negotiating as if your life depended on it (written with Tahl Raz) is a distillation of Voss’s approach to negotiation, developed through 15 years negotiating hostage situations for the FBI. Voss was the FBI’s lead international kidnapping negotiator, and for the last decade he has run a consulting firm that guides organisations through negotiations.

I am not sure how I should rate the book. There are elements I like, elements that seem logical, and yet a sense that much is just storytelling. I don’t know enough of the negotiation literature to understand what other support there might be for Voss’s approach – and Voss generally doesn’t draw on the literature – so it is not clear what weight I should give to his arguments.

Voss’s central thread is that we should not approach negotiation as though it is a purely rational exercise. No matter how you frame the negotiation in advance, there is no escaping the humans that will be engaging in that negotiation.

This argument seems obvious, as in many negotiations you will be dealing with emotional people. Yet a flip through some of the classic negotiating texts, such as Getting to Yes, shows that the consideration of emotion is often shallow. Emotion is largely discussed as something to be overcome so that a mutually beneficial deal can be reached.

A deeper understanding of the role of emotion comes from seeing how integral it is to the negotiating process. Emotion and decision-making cannot be disentangled.

In the opening chapter, Voss links this need to consider emotions to the work of Daniel Kahneman and Amos Tversky (unfortunately described as University of Chicago professors who discovered more than 150 cognitive biases). Voss draws on Kahneman’s distinction between the two modes of thought described in Thinking, Fast and Slow: the fast, instinctive and emotional System 1, and the slow, deliberative and logical System 2. If you go into a negotiation with all the tools to deal with System 2 but without the tools to read, understand and manipulate System 1, you are trying to make an omelette without cracking an egg.

Despite being prominent in the opening, Kahneman and Tversky’s work is only briefly considered in other parts of the book, mainly in one chapter that includes examination of anchoring and loss aversion. By manipulating someone’s reference point and capitalising on their fear of loss, you can shift the terms of what they will agree to.

For instance, Voss suggests that you might initially anchor the other side’s expectations through an “accusation audit”, whereby you list every terrible thing the other side could say about you in advance. You then create a frame so that the agreement is to avoid loss. Putting those together, you might start out by saying that you have a horrible deal for them, but still want to bring it to them before you give it to somebody else. By taking the sting out of the low offer and framing acceptance of that offer as an opportunity to avoid loss, you might induce acceptance.

Voss also discusses the idea of setting a very high or low anchor early in negotiations, although he notes that this comes at a cost. It might be effective against the inexperienced, but by anchoring first you lose the opportunity to learn from the other side’s opening offer. If you are prepared, you can resist their anchor, and if you are in a low-information environment, letting them go first might leave you pleasantly surprised.

Voss recognises the human desire for fairness as another important factor. While Voss draws on the academic literature to demonstrate that desire, his proposed approaches to fairness in negotiation are not put in the context of that literature. As a result, I don’t have much of a grip on whether his ideas – such as avoiding accusations of unfairness, and giving the other side permission to stop you at any time if they feel you are being unfair – are effective. It’s polite, it sounds reasonable, but does it work?

The concept that gets the most attention in the book is tactical empathy. This involves active listening, with tools such as mirroring (repeating the last few words someone said to induce them to keep explaining), labelling (giving a name to their feelings) and summarising their position back to them. I am partial to these ideas. By listening, you can learn a lot. I have always found that simple repetition of concepts, whether through mirroring, labelling or summarising, is a powerful tool to get people to open up and to understand their position.

Another thread to the book is the idea of saying no without saying no, generally through the use of calibrated questions. Calibrated questions are questions with no fixed answer, and that can’t be answered with a yes or no. They typically start with “how” or “what”, rather than “is” or “does”. They can be used to give the other side the illusion of control while at the same time pushing them to think about solving your problem. If the price is higher than you want to pay, you might say “how am I supposed to pay that?” Calibrated questions also have broader use through the negotiation to learn more from your counterpart.

Ideas such as this seem attractive, but I don’t know how much weight I should put on Voss’s arguments. This is largely because I don’t know how much expertise you could develop in hostage negotiation, and the degree to which that expertise is tacit knowledge. Voss notes that his expertise is built from experience, not from textbooks, and that his approach is designed for the real world. Can a human build skills for this real world? Is there rapid feedback on decisions, with an opportunity to learn?

In one sense there is feedback, with the hostages released or not, and the terms of that release known. But each negotiation would involve a multitude of decisions and factors. Conversations might extend for days or weeks. How effectively can you isolate the cause of the outcome? How stable is that cause-effect relationship across different negotiations?

In a podcast episode with Sam Harris, Voss mentioned that he had been involved in around 150 hostage negotiations around the world. That would seem a fair number from which to start identifying patterns, particularly if you consider that each negotiation might offer many smaller opportunities for feedback, such as extracting information. But as Voss’s stories through the book show, these negotiations spanned many different countries and contexts. How many of those elements are common and stable enough for true expertise to develop? Most of his experience involved international kidnapping – a commodity business involving financial transactions. Can the lessons from these be applied elsewhere?

Voss (and the FBI more generally) would have had a broader range of examples to draw on, and Voss’s more recent experience in consulting on negotiation could provide further opportunities to develop expertise. But it’s not obvious how that experience is incorporated into expertise that in turn can be effectively distilled into a book.

Me on Rationally Speaking, plus some additional thoughts

My conversation with Julia Galef on Rationally Speaking is out, exploring territory on how behavioural economics and its applications could be better.

There are links to a lot of the academic articles we discuss on the Rationally Speaking site. We also talk about several of my own articles, including:

Please not another bias! An evolutionary take on behavioural economics (This article is my presentation script for a marketing conference. I’ve been meaning to rewrite it as an article for some time to remove the marketing flavour and to replace the evolutionary discussion with something more robust. Much of the evolutionary psychology experimental literature relies on priming, and I’m not confident the particular experiments I reference will replicate.)

Rationalizing the “Irrational”

When Everything Looks Like a Nail: Building Better “Behavioral Economics” Teams

The difference between knowing the name of something and knowing something

There were a couple of questions for which I could have given different (better) answers, so here are some additional thoughts.

An example of evolutionary biology “explaining” a bias: I gave an example of the availability heuristic, but one for which more work has been done explicitly on the evolutionary angle is loss aversion. Let me quote from an article by Owen Jones, who has worked directly on this:

To test the idea that a variety of departures from rational choice predictions might reflect evolved adaptations, I had the good fortune to team up with primatologist Sarah Brosnan.

The perspective from behavioral biology on the endowment effect is simple: in environments that lacked today’s novel features (such as reliable property rights, third-party enforcement mechanisms, and the like) it is inherently risky to give up what you have for something that might be slightly better. Nothing guarantees that your trading partner will perform. So in giving up one thing for another you might wind up with neither.

So the hypothesis is that natural selection would historically have favored a tendency to discount what you might acquire or trade for, compared to what you already have in hand, even if that tendency leads to irrational outcomes in the current (demonstrably different) environment. The basis of the hypothesis is a variation of the maxim that a bird in the hand has been, historically, worth two in the bush.

First, we predicted that if the endowment effect were in fact an example of Time-Shifted Rationality then the endowment effect would likely be observable in at least some other species. Here’s why, in a nutshell. … If the endowment effect tends to lead on average to behaviors that were adaptive when there are asymmetric risks of keeping versus exchanging, then this isn’t likely to be true only for humans. It should at a minimum be observable in some or all of our closest primate relatives, i.e. the other 4 of the 5 great apes: chimpanzees, orangutans, gorillas, and bonobos.

Second, we predicted that if the endowment effect were in fact an example of Time-Shifted Rationality, the prevalence of the endowment effect in other species is likely to vary across categories of items. This follows because selection pressures can, and very often do, narrowly tailor behavioral predispositions that vary as a function of the evolutionary salience (i.e., significance) of the circumstance. Put another way, evolved behavioral adaptations can be “facultative,” consisting of a hierarchical set of “if-then” predispositions, which lead to alternate behaviors in alternate circumstances. Because no animal evolved to treat all objects the same, there’s no reason to expect that they, or humans, would exhibit the endowment effect equally for all objects, or equally in all circumstances. Some classes of items are obviously more evolutionarily significant than others – simply because value is not distributed evenly across all the items a primate encounters.

Third (and as a logical consequence of prediction (2)), we predicted that the prevalence of the endowment effect will correlate – increasing or decreasing, respectively – with the increasing or decreasing evolutionary salience of the item in question. Evolutionary salience refers to the extent to which the item, under historical conditions, would contribute positively to the survival, thriving, and reproducing of the organism acquiring it.

To test these predictions, we conducted a series of experiments with chimpanzee subjects. No other extant theory generated this set of three predictions. And the results of our experiments corroborated all three.

Our results provided the first evidence of an endowment effect in another species. Specifically, and as predicted, chimpanzees exhibit an endowment effect consonant with many of the human studies that find the effect. As predicted, the prevalence of the effect varies considerably according to the class of item. And, as predicted most specifically, the prevalence was far greater (fourteen times greater, in fact) when trading within a class of evolutionarily salient items – here, food items – for each other than it was when trading within a class of non-evolutionarily salient items – here, toys. Put another way, our subjects were fourteen times more likely to hang onto their less-preferred evolutionarily salient item, when they could trade it for their more-preferred evolutionarily salient item, than they were to hang onto their less-preferred item in corresponding trades with non-evolutionarily salient items.

On the role of signalling: Costly signalling theory is the idea that for a signal of quality to be honest, it should impose a cost on the signaller that someone without that quality would not be able to bear. For instance, peacocks have large unwieldy tails that consume a lot of resources, with only the highest quality males able to incur this cost.

To understand how signalling might affect our understanding of human behaviour, I tend to categorise the possible failures to understand people’s behaviour in the following three ways.

First, we can simply fail to understand the objective. If a person’s objective is status, and we try to understand their actions as attempts to maximise income, we might mistakenly characterise their decisions as errors.

Second, we might understand the proximate objective, but fail to realise that there is an underlying ultimate objective. Someone might care about, say, income, but if it is in the context of achieving another objective, such as getting enough income to make a specific purchase, we might similarly fail to properly assess the “rationality” of their decisions. For example, there might be a minimum threshold that leads us to take “risky” actions in relation to our income.

Signalling sometimes falls into this second basket, as the proximate objective is the signal for the ultimate objective. For instance, if we use education as a signal of our cognitive and social skills to get a job, viewing the objective as getting a good education misses the point.

Third, even if we understand the proximate and ultimate objectives, we might fail to understand the mechanism by which the objective is achieved. Signalling can lead to complicated mechanisms that are often overlooked.

To illustrate, even if we know that someone is only seeking further education to increase their employment prospects, we would expect different behaviour if education were a signal rather than a source of skills for use on the job. If education is purely a signal, people may only care about getting the credential, not what they learn. If education serves a more direct purpose, we would expect students to invest much more in learning.
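
The education-as-signal case is the classic Spence signalling model. Here is a minimal sketch of its logic in Python (the payoffs are assumed for illustration, not drawn from any cited source): a credential that teaches nothing can still separate types if it is cheaper for the more able to acquire.

```python
# Toy Spence-style signalling model (assumed payoffs, illustrative only).
# Education teaches nothing here; it is a pure signal, so all that
# matters is obtaining the credential, not what is learned.

wage_with_degree = 100    # wage paid to those who signal
wage_without_degree = 60  # wage paid to those who don't
cost_of_degree = {"high_ability": 20, "low_ability": 50}

for worker_type, cost in cost_of_degree.items():
    payoff_signal = wage_with_degree - cost
    payoff_no_signal = wage_without_degree
    choice = "gets the degree" if payoff_signal > payoff_no_signal else "skips it"
    print(f"{worker_type}: {choice}")

# high_ability: 100 - 20 = 80 > 60 -> gets the degree
# low_ability:  100 - 50 = 50 < 60 -> skips it
# The credential separates the types even though it adds no skills.
```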

I discuss a couple of these points in my Behavioral Scientist article Rationalizing the “Irrational”. Evolutionary biology is a great source of material on signalling, although as I have written about before, economists did have at least one crucial insight earlier.

Finally, I’ve plugged Rationally Speaking before, and here is a list of some of the episodes I enjoyed the most (there are transcripts if you prefer to read):

Tom Griffiths on how our biases might be signs of rationality

Daniel Lakens on p-hacking

Paul Bloom on empathy

Bryan Caplan on parenting

Phil Tetlock on forecasting

Tom Griffiths and Brian Christian on Algorithms to Live By

Don Moore on overconfidence

Jason Brennan on “Against Democracy”

Jessica Flanigan on self-medication

Alison Gopnik on parenting

Christopher Chabris on collective intelligence

I limited myself to eleven here – there are a lot of other great episodes worth listening to.

An evolutionary projection of global fertility and population: My new paper (with Lionel Page) in Evolution & Human Behavior

Forecasting fertility is a mug’s game. Here is a picture of fertility forecasts by the US Census Bureau through the baby boom and subsequent fertility drop (from Lee and Tuljapurkar’s Population Forecasting for Fiscal Planning: Issues and Innovations). The dark line is actual, the dotted line the various forecasts.

[Chart: US Census Bureau fertility forecasts against actual fertility]

I am not sure that the science of fertility forecasting in developed countries has made substantial progress since any of those forecasts were made. But that doesn’t stop a lot of people from trying.

One of the most high profile forecasts of fertility and population comes from the United Nations, which publishes global population forecasts through to 2100. Individual country forecasts are currently developed using a Bayesian methodology, which are then aggregated to form a global picture. The development of this methodology led to a heavily cited 2014 paper titled “World population stabilization unlikely this century” (pdf) and the conclusion that there was only a 30% probability that global population growth would cease this century.

These projections contain an important fertility assumption. For countries that have undergone the demographic transition to low fertility, the assumption is that their fertility rate will oscillate around a long-term mean. While there has been some debate around whether this long-term mean would be the replacement rate or lower, the (almost theory-free) assumption of oscillation around a long-term level dominates the forecasts.

There is at least one theoretical basis for doubting this assumption. In a 2013 working paper (co-authored with Oliver Richards), we argued that because fertility is heritable, this would tend to increase fertility and population growth. Those with a preference for higher fertility would have more children, with their children in turn having a preference for more children. This high-fertility type would eventually come to dominate the population, leading to a markedly higher population than forecast.
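
A toy simulation makes the compounding clear (the numbers are assumed for illustration and are not the parameters of the paper): even a small high-fertility minority comes to dominate within a handful of generations.

```python
# Toy two-type model of heritable fertility (assumed numbers, not the
# parameters of the paper). Children inherit their parents' fertility
# preference, so the high-fertility type's population share compounds.
low_pop, high_pop = 95.0, 5.0   # initial populations of each type
low_tfr, high_tfr = 1.6, 2.4    # children per woman for each type

for generation in range(8):
    share = high_pop / (low_pop + high_pop)
    print(f"generation {generation}: high-fertility share = {share:.1%}")
    low_pop *= low_tfr / 2    # halved: children per woman -> per person
    high_pop *= high_tfr / 2
```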

As I noted when the working paper was released, we were hardly the first to propose this idea. Fisher noted the power of higher fertility groups in The Genetical Theory of Natural Selection. I had seen Razib Khan, Robin Hanson and John Hawks mention the idea. Murphy and Wang examined the concept in a microsimulation. Many papers on the heritability of fertility hint at it. Rowthorn’s paper on fertility and religiosity also points in this direction. We simply added a touch of quantitative modelling to explore the speed of the change, and have now been followed by others with different approaches (such as this).

Shortly after I posted about the working paper, I received an email from Lionel Page suggesting that we should turn this idea into a more detailed simulation of world population. Five years after Lionel’s email, that simulation has just been released in a paper published in Evolution & Human Behavior. Here is the abstract:

The forecasting of the future growth of world population is of critical importance to anticipate and address a wide range of global challenges. The United Nations produces forecasts of fertility and world population every two years. As part of these forecasts, they model fertility levels in post-demographic transition countries as tending toward a long-term mean, leading to forecasts of flat or declining population in these countries. We substitute this assumption of constant long-term fertility with a dynamic model, theoretically founded in evolutionary biology, with heritable fertility. Rather than stabilizing around a long-term level for post-demographic transition countries, fertility tends to increase as children from larger families represent a larger share of the population and partly share their parents’ trait of having more offspring. Our results suggest that world population will grow larger in the future than currently anticipated.

Our methodology is almost identical to the United Nations methodology, except we substitute the equation by which fertility converges to a long-term mean with the breeder’s equation, which captures the response to selection of a trait.
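
The breeder’s equation is R = h²S: the response to selection R (the change in the mean of a trait across one generation) equals the trait’s heritability h² times the selection differential S (the gap between the offspring-weighted mean fertility of parents and the population mean). A minimal sketch of the two updating rules being contrasted, with assumed parameter values rather than the paper’s:

```python
# The two fertility-updating rules being contrasted (assumed parameter
# values for illustration; the paper embeds the substitution in the
# UN's full probabilistic framework).

def un_style_update(tfr, long_term_mean=1.85, reversion=0.1):
    # UN-style assumption: fertility drifts back toward a long-term mean.
    return tfr + reversion * (long_term_mean - tfr)

def breeders_equation_update(tfr, h2=0.3, selection_diff=0.2):
    # Breeder's equation R = h^2 * S: mean fertility shifts each
    # generation by heritability (h2) times the selection differential
    # (S, held constant here for simplicity).
    return tfr + h2 * selection_diff

tfr_un = tfr_evo = 1.6
for generation in range(4):   # one generation is roughly 25 years
    print(f"gen {generation}: mean-reverting {tfr_un:.2f}, heritable {tfr_evo:.2f}")
    tfr_un = un_style_update(tfr_un)
    tfr_evo = breeders_equation_update(tfr_evo)
```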

And here are a few charts showing the simulation results: grey is the base United Nations simulation, black the evolutionary simulations, the dashed lines the 90% confidence intervals. First, European total fertility rate (TFR) and population, which shifts from terminal decline to growth:

[Charts: European TFR and population projections]

Next, North America, which increases its rate of growth:

[Charts: North American TFR and population projections]

Next, Asia:

[Charts: Asian TFR and population projections]

And finally, the global result:

[Charts: global TFR and population projections]

The punchline is that the probability of global population stabilisation this century becomes less than 5%. Europe and North America are the most affected within this century. Asia is less affected, but still shifts from a scenario of decline to one of growth, and due to its size has the largest effect on the global projections.

Having opened by saying that fertility forecasting is a mug’s game, should the same be said about these forecasts? The answer to that question is largely yes. Cultural and technological change, environmental shocks and the like will almost certainly lead to a different outcome to the one the United Nations or we have forecast. We effectively argue this in the section of the paper on cultural evolution (which was added following some helpful reviewer comments).

But to get lost in the specific numbers is to lose sight of the exercise. We are arguing that an important assumption underpinning the United Nations exercise should be reconsidered. We’ve given a rough idea of how far that assumption could shift the fertility and population outcomes, and they are of a magnitude that would see some parts of the world looking quite different by the end of the century. If we assume constant fertility despite this evolutionary dynamic, we risk a material downward bias in projecting future fertility and population.

As an aside, the freely available methodology and R packages that underpin the United Nations forecasts greatly facilitated our efforts. We spent a lot of time considering how to implement the simulations, but on discovering the openness of the United Nations approach, we had a ready-made foundation for our tweaked version. In that spirit, you can access our modified packages and the data used to generate them here at OSF.

If you can’t access the paper through the paywall and would like me to email you a copy, hit me up in the comments below. I’ll also put up a version of the working paper here shortly.

The Paradox of Trust

In a chapter of Robert Sugden’s The Community of Advantage: A Behavioural Economist’s Defence of the Market, he makes some interesting arguments about how we should interpret the results of the trust game. (This is the last of several posts drawing on the book.)

First, what is the trust game:

The ‘Trust Game’ was first investigated experimentally by Joyce Berg, John Dickhaut, and Kevin McCabe (1995). … In Berg et al.’s game, two players (A and B) are in separate rooms and never know one another’s identity. Each player is given $10 in one-dollar bills as a ‘show up fee’. A puts any number of these bills, from zero to ten, in an envelope which will be sent to B; he keeps the rest of the money for himself. The experimenter supplements this transfer so that B receives three times what A chose to send. B then puts any number of the bills she has received into another envelope, which is returned to A; she keeps the rest of the money for herself. The game is played once only, and the experiment is set up so that no one (including the experimenter) can know what any other identifiable person chooses to do. The game is interesting to theorists of rational choice because it provides the two players with an opportunity for mutual gain, but if the players are rational and self-interested, and if each knows that this is true of the other, no money will be transferred. (It is rational for B to keep everything she is sent; knowing this, it is rational for A to send nothing.)
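
A minimal sketch of the payoff arithmetic (the $10 stakes and tripling follow the description above; the function itself is mine, for illustration):

```python
def trust_game_payoffs(sent, returned):
    """Payoffs in the Berg et al. Trust Game described above.

    Both players start with a $10 show-up fee. A sends `sent` dollars;
    the experimenter triples the transfer, so B receives 3 * sent; B
    then sends back `returned`.
    """
    assert 0 <= sent <= 10 and 0 <= returned <= 3 * sent
    payoff_a = 10 - sent + returned
    payoff_b = 10 + 3 * sent - returned
    return payoff_a, payoff_b

# Mutual gain is available: if A sends everything and B returns half
# of what she received...
print(trust_game_payoffs(sent=10, returned=15))   # (15, 25)

# ...but a self-interested B keeps everything, so a self-interested A
# who anticipates this sends nothing.
print(trust_game_payoffs(sent=0, returned=0))     # (10, 10)
```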

There is a sizeable body of empirical evidence that player A often does send money and B often returns money. How can this be explained? One option is to draw on the concept of reciprocity.

In this literature, it is a standard modelling strategy to follow Matthew Rabin (1993) in characterizing intentions as kind or unkind. … The greater the degree to which one player benefits the other by forgoing his own payoffs, the kinder he is. Rabin’s hypothesis is that individuals derive utility from their own payoffs, from being kind towards people who are being kind to them, and from being unkind towards people who are being unkind to them.

But if you think this hypothesis through, there is a problem, which Sugden calls the Paradox of Trust.

[I]t seems that any reasonable extension of Rabin’s theory will have the following implication for the Trust Game: It cannot be the case that A plays send, expecting B to play return with probability 1, and that B, knowing that A has played send, plays return. To see why not, suppose that A chooses send, believing that B will choose return with probability 1.

A has not faced any trade-off between his payoffs and B’s, and so has not had the opportunity to display kindness or unkindness.

Since Rabin often describes positive reciprocity as ‘rewarding’ kind behaviour (and describes negative reciprocity as ‘punishing’ unkind behaviour), the idea seems to be that B’s choice of return is her way of rewarding A for the goodness of send. But if A’s action was self-interested, it is not clear why it deserves reward.

It may seem paradoxical that, in a theory in which individuals are motivated by reciprocity, two individuals cannot have common knowledge that they will both participate in a practice of trust. Nevertheless, this conclusion reflects the fundamental logic of a modelling strategy in which pro-social motivations are represented as preferences that are acted on by individually rational agents. It is an essential feature of (send, return), understood as a practice of trust, that both players benefit from both players’ adherence to the practice. If A plays his part in the practice, expecting B to play hers, he must believe and intend that his action will lead to an outcome that will in fact benefit both of them. Thus, if pro-sociality is interpreted as kindness—as a willingness to forgo one’s own interests to benefit others—A’s choice of send cannot signal pro-social intentions, and so cannot induce reciprocal kindness from B. I will call this the Paradox of Trust.

Is there an alternative way of seeing this problem? Sugden turns to the idea of mutually beneficial exchange.

The escape route from the Paradox of Trust is to recognize that mutually beneficial cooperation between two individuals is not the same thing as the coincidence of two acts of kindness. When A chooses send in the Trust Game, his intention is not to be kind to B: it is to play his part in a mutually beneficial scheme of cooperation, defined by the joint action (send, return). … If A is completely confident that B will reciprocate, and if that confidence is in fact justified, A’s choice of send is in his own interests, while B’s choice of return is not in hers. Nevertheless, both players can understand their interaction as a mutually beneficial cooperative scheme in which each is playing his or her part.

This interpretation has implications for how we should view market exchange.

Theorists of social preferences sometimes comment on the fact that behaviour in market environments, unlike behaviour in Trust and Public Good Games, does not seem to reveal the preferences for equality, fairness and reciprocity that their models are designed to represent. The explanation usually offered is that people have social preferences in all economic interactions, but the rules of the market are such that individuals with such preferences have no way of bringing about the fair outcomes that they really desire.

Could it be that behaviour in markets expresses the same intentions for reciprocity as are expressed in Trust and Public Good Games, but that these intentions are misrepresented in theories of social preference?

Nudging and the problem of context dependent preferences

In my recent post on Robert Sugden’s The Community of Advantage: A Behavioural Economist’s Defence of the Market, I noted a couple of papers in which Sugden and Cass Sunstein debated how to make people better off “as judged by themselves” if they have context dependent preferences.

Below is one point and counterpoint that I found useful.

In a reply to Sugden’s paper, Cass Sunstein writes:

2. Mary is automatically enrolled in a Bronze Health Care Plan – it is less expensive than Silver and Gold, but it is also less comprehensive in its coverage, and it has a higher deductible. Mary prefers Bronze and has no interest in switching. In a parallel world (a lot like ours, but not quite identical, Wolf 1990), Mary is automatically enrolled in a Gold Health Care Plan – it is more expensive than Silver and Bronze, but it is also more comprehensive in its coverage, and it has a lower deductible. Mary prefers Gold and has no interest in switching.

3. Thomas has a serious illness. The question is whether he should have an operation, which is accompanied with potential benefits and potential risks. Reading about the operation online, Thomas is not sure whether he should go ahead with it. Thomas’ doctor advises him to have the operation, emphasizing how much he has to lose if he does not. He decides to follow the advice. In a parallel world (a lot like ours, but not quite identical), Thomas’s doctor advises him not to have the operation, emphasizing how much he has to lose if he does. He decides to follow the advice.

In the latter two cases, Mary and Thomas appear to lack an antecedent preference; what they prefer is an artifact of the default rule (in the case of Mary) or the framing (in the case of Thomas). …

These are the situations on which I am now focusing: People lack an antecedent preference, and what they like is a product of the nudge. Their preference is constructed by it. After being nudged, they will be happy and possibly grateful. We have also seen that even if people have an antecedent preference, the nudge might change it, so that they will be happy and possibly grateful even if they did not want to be nudged in advance.

In all of these cases, application of the AJBT [as judged by themselves] criterion is less simple. Choice architects cannot contend that they are merely vindicating choosers’ ex ante preferences. If we look ex post, people do think that they are better off, and in that sense the criterion is met. For use of the AJBT criterion, the challenge is that however Mary and Thomas are nudged, they will agree that they are better off. In my view, there is no escaping at least some kind of welfarist analysis in choosing between the two worlds in the cases of Mary and Thomas. There is a large question about which nudge to choose in such cases (for relevant discussion, see Dolan 2014). Nonetheless, the AJBT criterion remains relevant in the sense that it constrains what choice architects can do, even if it does not specify a unique outcome (as it does in cases in which people have clear ex ante preferences and in which the nudge does not alter them).

Sugden responds:

In Sunstein’s example, Thomas’s preference between having and not having an operation varies according to whether his attention is directed towards the potential benefits of the operation or towards its potential risks. A choice architect can affect Thomas’s choice by choosing how to present given information about benefits and risks. The problem for the AJBT criterion is that Thomas’s judgement about what makes him better off is itself context-dependent, and so cannot be used to determine the context in which he should choose.

In response to the question of what the choice architect ought to do in such cases, Sunstein concludes that ‘there is no escaping at least some kind of welfarist analysis’—that is, an analysis that makes ‘direct inquiries into people’s welfare’. In his comment, Sunstein does not say much about how a person’s welfare is defined or assessed, but many of the arguments in Nudge imply that the method of enquiry is to try to reconstruct the (assumedly context-independent) latent preferences that fully informed choosers would reveal in the absence of psychologically induced error. Sunstein seems to endorse this methodology when he says: ‘It is psychologically fine to think that choosers have antecedent preferences, but that because of a lack of information or a behavioural bias, their choices will not satisfy them’. Here I disagree. In the first part of my paper, which summarises a fuller analysis presented by Infante et al. (2016), I argued that it is not psychologically fine to assume that human choices result from interactions between context-independent latent preferences and behavioural biases. I maintain that the concept of latent preference is psychologically ungrounded.

I have interpreted AJBT, as applied to category (3) cases, as referring to the judgements implicit in choosers’ latent preferences. In his comment, Sunstein offers a different interpretation—that the relevant judgements are implicit in choosers’ actual posterior preferences. Take the case of Thomas and the operation. We are told that, in whichever direction Thomas is nudged, he will be ‘happy’ with his decision, judging himself to be better off than if he had chosen differently. In other words, any nudge that causes Thomas to change his decision can be said to make him better off, as judged by himself. I think Sunstein is going astray here by thinking of nudges as causing changes in preference. Suppose the doctor directs Thomas’s attention towards the benefits of the operation and advises him to have it. Thomas accepts this advice. At the moment of choice, Thomas is thinking about the options in the frame provided by the doctor, and so he thinks he is making the right decision. But suppose that, shortly before he is wheeled into the operating theatre, he looks at some medical website that uses the opposite frame. If his preferences are context-dependent, he may now wish he had chosen differently. Sunstein is not entitled to assume that, after choosers have been nudged, their judgements become context-independent. If the AJBT criterion is to have bite—if, as Sunstein says, it is to ‘discipline the content of paternalistic interventions’—it must adjudicate between the judgements that the chooser makes in different contexts. That is why Thaler and Sunstein need the concept of latent preference—with all its problems.

Robert Sugden’s The Community of Advantage: A Behavioural Economist’s Defence of the Market

There are few books critiquing behavioural economics that I find compelling. David Levine’s Is Behavioral Economics Doomed? attacks too many straw men. Gilles Saint-Paul’s The Tyranny of Utility: Behavioral Science and the Rise of Paternalism is more an attack on the normative foundations of economics than on behavioural science. And while Gerd Gigerenzer’s books make a strong case that many of the so-called “biases” are better described as good decision-making under uncertainty, Gigerenzer often extends his defence of human decision making too far.

In The Community of Advantage: A Behavioural Economist’s Defence of the Market, Robert Sugden finds a nice balance in his critique. Sugden starts by taking the evidence of behavioural anomalies seriously, reflecting his four decades working in the field. His critique focuses on how the behavioural research has been interpreted and used as part of the “nudge” movement to develop recommendations for the “planner”, “benevolent autocrat” or “choice architect”.

Sugden’s critique has two main thrusts. The first relates to how behavioural economists have interpreted the experimental evidence that our decisions don’t conform with rational choice theory. In the preface, Sugden writes:

I have to say that I have been surprised by the readiness of behavioural economists to interpret contraventions of rational choice theory as evidence of decision-making error. In the pioneering era of the 1980s and 1990s, this was exactly the interpretation of anomalies that mainstream economists typically favoured, and that we behavioural economists disputed. As some of us used to say, it is as if decision-makers are held to be at fault for failing to behave as the received theory predicts, rather than that theory being faulted for failing to make correct predictions.

In particular, Sugden sees behavioural economists as having adopted an approach whereby they see people as having inner preferences that conform with the rational choice model (“latent preferences”), contained within a “psychological shell”. This shell distorts our decisions through lack of attention, limited cognitive abilities and incomplete self-control. As Sugden points out, this approach has almost no relationship with actual psychological processes, and it is questionable whether these latent preferences exist.

The second thrust of Sugden’s critique relates to how the behavioural findings have triggered a public policy response that is largely paternalistic. In the preface, he continues:

I have been less surprised, but still disappointed, by the willingness of behavioural economists to embrace paternalism. And I have felt increasingly uneasy that, in public discourse, ideas from behavioural welfare economics are appealing to a sensibility that is hostile to principles of economic freedom—principles that, for two and a half centuries, have been central to the liberal tradition of economics.

Here Sugden undertakes the rather large task of seeking to displace the dominant normative basis of economics – utilitarianism – with “contractarianism”.

I’ll now cover each of these two arguments in the depth they deserve.

The concept of latent preferences

Decades of behavioural research have presented a challenge to neoclassical economics. Many of its underpinning assumptions about human preferences and decision making simply do not hold. So how can we reconcile the two?

Sugden makes the case that behavioural economists typically approach this problem by thinking of humans as rational beings wrapped in a layer of irrationality. (He draws heavily on his work with Gerardo Infante and Guilhem Lecouteux in Preference purification and the inner rational agent: A critique of the conventional wisdom of behavioural welfare economics (working paper pdf) in making this case.) Sugden pulls apart a number of the seminal papers on nudging, including Thaler and Sunstein’s Libertarian Paternalism (pdf) (the precursor to Nudge) and Colin Camerer, Samuel Issacharoff, George Loewenstein, Ted O’Donoghue, and Matthew Rabin’s Regulation for Conservatives (pdf), arguing that they share this common approach. For each of them, the underlying “latent preferences” are the benchmark against which utility is measured, with decisions that depart from them attributed to error.

Given this, the role of the planner (or “choice architect” as Thaler and Sunstein rebranded the planner in Nudge) is to try to reconstruct a person’s latent preferences. These latent preferences would have been revealed if they had not been affected by limitations of attention, information, cognitive ability or self-control. Sugden calls this reconstruction of latent preferences “preference purification”.

One of Sugden’s central points concerns whether preference purification is possible. For instance, it is only possible if the latent preferences are context independent.

To illustrate the problem of context independence, Sugden asks us to consider Thaler and Sunstein’s famous cafeteria story. Imagine that the relative prominence or ordering of food in a cafeteria affects people’s choices (and experimental evidence suggests that it does). The cafeteria director could place the fruit more prominently, with the cakes at the back, increasing purchases of fruit and “nudging” the customers to the healthy option.

Suppose the cake is at the front of the display. When the ordinary human “Joe” goes to the cafe, he selects the cake. If the fruit had been at the front, he would have selected the fruit. Has Joe made an error in his choice? We need to ask what his latent preference is. But suppose Joe is indifferent between cake and fruit. He is not misled by labelling or any false beliefs about the products or their effects on his health. He simply feels a desire to eat whatever is at the front of the display. What is the nature of the error?

To help answer this, imagine that SuperReasoner also goes to the cafe. SuperReasoner is just like Joe except that he “has the intelligence of Einstein, the memory of Deep Blue, and the self-control of Gandhi”. (Sugden borrows this combination of traits from Nudge). What happens when SuperReasoner encounters cake and fruit that vary in prominence? Since he is just like Joe, he is indifferent between the two. He also has the same feelings as Joe, so feels a desire to eat whatever is at the front. This is not a failing of intelligence, memory or self-control. There is no error. Rather, the latent preference itself is context dependent. But if latent preferences themselves are context dependent, how do you ever determine what a latent preference is? What is the right context?

When I first read this example, I was unclear how important it was. It was clear that Sugden had found a weakness in the latent preferences approach, but was this something practically important?

I think the answer to that question is yes, and it comes down to the disconnect between the latent preferences approach and the actual decision making processes of humans. Context independent latent preferences in many cases simply do not exist. They only come into existence in certain contexts. And whatever the psychological approach actually is, latent preferences in an inner shell isn’t it.

Even if there were an inner rational agent, advocates of the preference purification approach don’t attempt to understand or explain the decision making process of this inner agent. There is simply an assumption that there is some mode of latent reasoning that satisfies the economists’ principles of rationality, free from the imperfections created by the external psychological shell. (This is also a problem with rational choice theory. As Sugden writes “rational choice is not self-explanatory: if there are circumstances in which human beings behave in accordance with the theory of rational choice, that behaviour still requires a psychological explanation.”)

(As an aside that I won’t go further into today, Sugden and Sunstein continue this debate in a series of papers that are worth reading. See Sugden’s Do people really want to be nudged towards healthy lifestyles?, Sunstein’s response (pdf) and Sugden’s rejoinder. Sugden’s rejoinder has another great example of the problems created by context dependent latent preferences that I’ll discuss in another post.)

Sugden does see that one possible defence of the latent preference approach is to define latent preferences as the preferences that this same person will endorse in independently definable circumstances. People will acknowledge these latent preferences even when a lack of self control (akrasia) leads them to act against their better judgment.

Sunstein and Thaler draw on this interpretation in Nudge in their New Year’s resolution test. How many people vow to drink or smoke more when making their resolutions?

As Sugden points out, this is a context dependent preference. People are using the cue of the New Year to make their resolution. In the same way, if they decide later to have an extra glass of wine in a restaurant, they are responding to that particular context.

The issue then becomes which of these is the true preference. I presume Sunstein and Thaler would take the New Year’s resolution. Sugden is less sure. As he writes:

[J]ust as the restaurant gives cues that point in the direction of drinking, so the traditions of New Year give cues that point in the direction of resolutions for future temperance. If an argument based on akrasia is to work, we need to be shown that in the restaurant, the individual acknowledges that her true preferences are the ones that led to her New Year’s resolution and not the ones she is now acting on. In many cases that fit the story of the resolution and the restaurant, the individual in the restaurant will be thinking that resolutions should not be followed too slavishly, that there is a place in life for spontaneity, and that having an extra glass of wine would be an appropriately spontaneous response to the circumstances. A person who thinks like this as she breaks a previous resolution is not acting contrary to what, at the moment of choice, she acknowledges as her true preferences.

So why do behavioural economists tend to see problems such as this as self-control problems? Sugden suggests it is because of their commitment to the model of the inner rational agent: any context-dependent choice needs to be seen as an error. Sugden has a different view:

If one has no prior commitment to the idea of latent preferences, there is no reason to suppose that Jane has made any mistake at all. The question of how much she should drink may have no uniquely rational answer. Both when she was making New Year’s resolutions and when she was in the restaurant, she had to strike a balance between considerations that pointed in favour of alcohol and considerations that pointed against it. The simplest explanation of her behaviour is that she struck one balance in the first case and a different balance in the second. This is not a self-control problem; it is a change of mind.

The contractarian perspective

While I have opened with Sugden’s critique of the nudging approach that emerged from his own field of behavioural economics, his agenda in The Community of Advantage is much broader – an alternative normative basis for economics that is consistent with the psychological evidence.

This normative basis is not new. At the beginning of the book Sugden traces it to John Stuart Mill – the book’s title comes from Mill’s description of the market as a “community of advantage”. Mill considered that economic life is, or should be, built on mutually beneficial cooperation. If people participate in relationships of mutual benefit, they will come to understand that they are cooperative partners, not rivals.

Sugden uses the term “contractarianism” to describe this normative foundation, drawing inspiration from James Buchanan. Buchanan saw economics not as being about how the market should achieve certain ends, but about how the market provides a forum in which people can enter into voluntary exchange.

The question for the economist thus becomes what institutional arrangements will maximise the opportunity for mutually beneficial cooperation – or, more specifically, what institutional arrangements it is in the interest of each individual to accept, given that everyone else accepts the same. As Sugden shows (through some rather technical proofs), markets tend to intermediate mutually beneficial transactions, so, like neoclassical economics, contractarianism provides support for the use of markets. The argument does not rely on agents having integrated preferences, so it sidesteps the inadequacies of rational choice theory. In fact, opportunity is defined independently of people’s preferences, so the argument does not rely on preferences at all.
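To make the acceptance criterion a little more concrete, here is one way it might be expressed – this is my own notation and a deliberate simplification, not Sugden’s actual formalism. Writing \(O_i(\cdot)\) for the set of options open to individual \(i\), and \(S\) for the status quo:

\[
A \text{ is acceptable} \iff O_i(A) \supseteq O_i(S) \quad \text{for every individual } i, \text{ given that everyone else accepts } A.
\]

The point to notice is that the criterion quantifies over every individual and compares opportunity sets rather than utilities, which is why it does not depend on agents having well-behaved preferences.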

The contractarian approach does not result in a blunt call for no government action. Perhaps the starkest example of this is Sugden’s suggestion that retirement savings might be mandated. Sugden is sceptical that savings shortfalls are driven by short-term desires to spend, and asks whether the large economic, political and personal uncertainties involved in saving for a retirement decades away are more important. Among other things, people may simply believe that their collective voting power will enable them to secure sufficient transfers from the working population whatever they do.

Sugden suggests the problem here is one of collective action: what is the credibility of a policy regime in which private savings play a major part if a large proportion of people simply won’t play ball? In a society where the imprudent have votes, some form of mandatory saving might be required to create a sustainable institutional structure that guarantees some minimum living standard.

On my first read of the book I struggled to understand exactly what a contractarian would think about nudging (I am no philosopher), but one passage gave me the closest glimpse:

A typical questioner will describe some case in which (as judged by the questioner) a mild but unchosen nudge would be very beneficial to its nudgees. Perhaps the nudgees are morbidly obese, and the nudge is a government policy that will make unhealthy fast food less readily available. The questioner asks me: What would you do in this case? To which my reply is: What do you mean, what would I do? What is the hypothetical scenario in which I am supposed to be capable of doing something about the diets of my morbidly obese fellow-citizens?

If the scenario is one in which Robert Sugden is in a roadside restaurant and a morbidly obese stranger is sitting at another table ordering a huge all-day breakfast as a mid-afternoon snack, the answer is that I would do nothing. I would think it was not my business as a diner in a restaurant to make gratuitous interventions into other diners’ decisions about what to eat. But of course, this is not the kind of scenario the questioner has in mind. What is really being asked is what I would do, were I a benevolent autocrat. My answer is that I am not a benevolent autocrat, nor the adviser to one. As a contractarian economist, I am not imagining myself in either of those roles. I am advising individuals about how to pursue their common interests, and paternalism has no place in such advice.

Another passage on the mindset of the contractarian was also helpful:

Sunstein and Thaler devote a chapter of Nudge to the issue of retirement savings. The content of this chapter is summarized in the final paragraph:

Saving for retirement is something that Humans [as contrasted with ideally rational agents] find difficult. They have to solve a complicated mathematical problem to know how much to save, and then they have to exert a lot of willpower for a long time to execute this plan. This is an ideal domain for nudging. In an environment in which people have to make only one decision per lifetime, we should surely try harder to help them get it right. (Thaler and Sunstein, 2008: 117)

Look at the final sentence. Sunstein and Thaler are telling their readers that we should try harder to help them get their decisions right. But who are the ‘we’ and who are the ‘they’ here? What ‘we’ are supposed to be doing is designing and implementing choice architecture that nudges individuals to save more for retirement; so presumably ‘we’ refers to government ministers, legislators, regulators, human resource directors, and their respective assistants and advisers; ‘they’ are the individuals who should be saving. As an expert adviser on the design of occupational pension schemes, Thaler is certainly entitled to categorize himself as one of the ‘we’. But where do his readers belong? Very few of them will be in any position to design savings schemes, but just about all of them will face, or will have faced, the problem of saving for retirement. From a reader’s point of view, Sunstein and Thaler’s conclusion would be much more naturally expressed as: They should try harder to help us get it right.

Sunstein and Thaler are writing from the perspective of insiders to the public decision-making process: they are writing as if they were political or economic decision-makers with discretionary power, or the trusted advisors of such decision-makers. And they are inviting their readers to imagine that they are insiders too—that they are the people in control of the nudging, not the people who are being nudged.

I suggest that the benevolent autocrat model appeals to people who like to imagine themselves as insiders in this sense.

In contrast, the contractarian approach appeals to people who take an outsider’s view of politics, thinking of public decision-makers as agents and themselves as principals. The sort of person I have in mind does not think that he has been unjustly excluded from public decision-making or debate; he is more likely to say that he has (what for him are) more important things to do with his time. He does not claim to have special skills in economics or politics, and is willing to leave the day-to-day details of public decision-making to those who do—just as he is willing to leave the day-to-day maintenance of his central heating system to a trained technician. But when public decision-makers are dealing with his affairs, he expects them to act in his interests, as he perceives them. He does not expect them to set themselves up as his guardians.

Sugden states – and I agree – that he takes the psychological evidence more seriously than most nudge advocates. But his approach – which doesn’t rely on integrated preferences – does at first glance seem to have some weaknesses. How do people identify these opportunities for mutual benefit? If we increase opportunity, do we end up with choice overload?

I covered some of Sugden’s views on choice overload in a previous post, where he argued that much of the concern about choice overload amounts to condescension towards other people’s preferences. But he does take some of the issues seriously. For instance, he notes that long menus of retirement or insurance plans lead to poorer choices, both because people lack pre-existing preferences over the options and because navigational aids are absent (partly the result of public programs needing to be impartial in the way they present options).

Sugden also sees problems with “obfuscation” in markets, where firms deliberately price their products, or present their pricing information, in overly complex ways. Firms might use bait pricing, where only a small quantity of stock is available at the advertised price, or exploding offers, where a decision must be made within a short timeframe.

Here the contractarian does not seek to close opportunities for exchange, but rather to provide a better institutional structure. This might involve transparency requirements, such as requiring prices to be quoted for complementary bundles of goods (e.g. printers and print cartridges) without requiring that the bundle be purchased together. Exploding offers designed to induce quick decisions might be tempered by cooling-off periods. Other product information, such as calorie counts, might be required on menus. Importantly, these measures are not then taken to have failed if someone continues to purchase the high-calorie food.

Choice overload aside, the question of how effective people are at identifying and capitalising on opportunities for mutual benefit is less clearly addressed. Sugden reviews some of the experimental evidence relating to fairness and suggests it points to a preference for mutually beneficial exchange (also a subject for another post). But this does not extend to the question of how effective we are at spotting those opportunities for exchange in the first place.

A discussion

If you would prefer to get a flavour of the book in a different format, below is a video discussion between Sugden, Henry Leveson-Gower and me on some of the topics in the book.

Do nudges diminish autonomy?

Although nudges, by definition, do not limit liberty, many people feel some discomfort about governments using them. I typically find it difficult to elicit precisely what the problem is, but it often comes down to the difference between freedom and autonomy.

In their essay Debate: To Nudge or Not to Nudge (pdf), Daniel Hausman and Brynn Welch do a good job of pulling this idea apart:

If one is concerned with autonomy as well as freedom, narrowly conceived, then there does seem to be something paternalistic, not merely beneficent, in designing policies so as to take advantage of people’s psychological foibles for their own benefit. There is an important difference between what an employer does when she sets up a voluntary retirement plan, in which employees can choose to participate, and what she does when, owing to her understanding of limits to her employees’ decision-making abilities, she devises a plan for increasing future employee contributions to retirement. Although setting up a voluntary retirement plan may be especially beneficial to employees because of psychological flaws that have prevented them from saving on their own, the employer is expanding their choice set, and the effect of the new plan on employee savings comes mainly as a result of the provision of this new alternative. The reason why nudges such as setting defaults seem, in contrast, to be paternalist, is that in addition to or apart from rational persuasion, they may “push” individuals to make one choice rather than another. Their freedom, in the sense of what alternatives can be chosen, is virtually unaffected, but when this “pushing” does not take the form of rational persuasion, their autonomy—the extent to which they have control over their own evaluations and deliberation—is diminished. Their actions reflect the tactics of the choice architect rather than exclusively their own evaluation of alternatives.

And not only might nudges diminish autonomy; they might also simply be disrespectful.

One reason to be troubled, which Thaler and Sunstein to some extent acknowledge (p. 246/249), is that such nudges on the part of the government may be inconsistent with the respect toward citizens that a representative government ought to show. If a government is supposed to treat its citizens as agents who, within the limits that derive from the rights and interests of others, determine the direction of their own lives, then it should be reluctant to use means to influence them other than rational persuasion. Even if, as seems to us obviously the case, the decision-making abilities of citizens are flawed and might not be significantly diminished by concerted efforts to exploit these flaws, an organized effort to shape choices still appears to be a form of disrespectful social control.

But what if you believe that paternalistic policies are in some cases defensible? Are nudges the milder version?

Is paternalism that plays on flaws in human judgment and decision-making to shape people’s choices for their own benefit defensible? If one believes, as we do, that paternalistic policies (such as requiring the use of seat belts) that limit liberty are sometimes justified, then it might seem that milder nudges would a fortiori be unproblematic.

But there may be something more insidious about shaping choices than about open constraint. For example, suppose, for the purposes of argument, that subliminal messages were highly effective in influencing behavior. So the government might, for example, be able to increase the frequency with which people brush their teeth by requiring that the message, “Brush your teeth!” be flashed briefly during prime-time television programs. Influencing behavior in this way may be a greater threat to liberty, broadly conceived, than punishing drivers who do not wear seat belts, because it threatens people’s control over their own evaluations and deliberation and is so open to abuse. The unhappily coerced driver wearing her seat belt has chosen to do so, albeit from a limited choice set, unlike the hypothetical case of a person who brushes his teeth under the influence of a subliminal message. In contrast to Thaler and Sunstein [authors of Nudge], who maintain that “Libertarian paternalism is a relatively weak and nonintrusive type of paternalism,” to the extent that it lessens the control agents have over their own evaluations, shaping people’s choices for their own benefit seems to us to be alarmingly intrusive.

Hausman and Welch outline three distinctions – to which I am somewhat sympathetic – that can help us think about whether nudges should be permissible.

First, in many cases, regardless of whether there is a nudge or not, people’s choices will be shaped by factors such as framing, a status quo bias, myopia and so forth. Although shaping still raises a flag because of the possibility of one agent controlling another, it arguably renders the action no less the agent’s own, when the agent would have been subject to similar foibles in the absence of nudges. When choice shaping is not avoidable, then it must be permissible.

Second, although informed by an understanding of human decision-making foibles, some nudges such as “cooling off periods” (p. 250/253) and “mandated choice” (pp. 86–7/88) merely counteract foibles in decision-making without in any way pushing individuals to choose one alternative rather than another. In this way, shaping apparently enhances rather than threatens an individual’s ability to choose rationally. …

Third, one should distinguish between cases in which shaping increases the extent to which a person’s decision-making is distorted by flaws in deliberation, and cases in which decision-making would be at least as distorted without any intentionally designed choice architecture. In some circumstances, such as (hypothetical) subliminal advertising, the foibles that make people care less about brushing their teeth are less of a threat to their ability to choose well for themselves than the nudging. In other cases, such as Carolyn’s, the choices of some of the students passing through the cafeteria line would have been affected by the location of different dishes, regardless of how the food is displayed.

There remains an important difference between choices that are intentionally shaped and choices that are not. Even when unshaped choices would have been just as strongly influenced by deliberative flaws, calculated shaping of choices still imposes the will of one agent on another.

One ironic wrinkle in all of this, however, is whether it is actually possible to choose “rationally” at all. Hausman and Welch see this point:

When attempting to persuade people rationally, we may be kidding ourselves. Our efforts to persuade may succeed because of the softness of our smile or our aura of authority rather than the soundness of our argument, but a huge difference in aim and attitude remains. Even if purely rational persuasion were completely impossible—that is, if rational persuasion in fact always involved some shaping of choices as well—there would be an important difference between attempting to persuade by means of facts and valid arguments and attempting to take advantage of loss aversion or inattention to get someone to make a choice that they do not judge to be best. Like actions that get people to choose alternatives by means of force, threats, or false information, exploitation of imperfections in human judgment and decision-making aims to substitute the nudger’s judgment of what should be done for the nudgee’s own judgment.