Me on Rationally Speaking, plus some additional thoughts

My conversation with Julia Galef on Rationally Speaking is out, exploring how behavioural economics and its applications could be better.

There are links to a lot of the academic articles we discuss on the Rationally Speaking site. We also talk about several of my own articles, including:

Please not another bias! An evolutionary take on behavioural economics (This article is my presentation script for a marketing conference. I’ve been meaning to rewrite it as an article for some time to remove the marketing flavour and to replace the evolutionary discussion with something more robust. Much of the evolutionary psychology experimental literature relies on priming, and I’m not confident the particular experiments I reference will replicate.)

Rationalizing the “Irrational”

When Everything Looks Like a Nail: Building Better “Behavioral Economics” Teams

The difference between knowing the name of something and knowing something

There were a couple of questions for which I could have given different (better) answers, so here are some additional thoughts.

An example of evolutionary biology “explaining” a bias: I gave an example of the availability heuristic, but one for which more work has been done explicitly on the evolutionary angle is loss aversion. Let me quote from an article by Owen Jones, who has worked directly on this:

To test the idea that a variety of departures from rational choice predictions might reflect evolved adaptations, I had the good fortune to team up with primatologist Sarah Brosnan.

The perspective from behavioral biology on the endowment effect is simple: in environments that lacked today’s novel features (such as reliable property rights, third-party enforcement mechanisms, and the like) it is inherently risky to give up what you have for something that might be slightly better. Nothing guarantees that your trading partner will perform. So in giving up one thing for another you might wind up with neither.

So the hypothesis is that natural selection would historically have favored a tendency to discount what you might acquire or trade for, compared to what you already have in hand, even if that tendency leads to irrational outcomes in the current (demonstrably different) environment. The basis of the hypothesis is a variation of the maxim that a bird in the hand has been, historically, worth two in the bush.

First, we predicted that if the endowment effect were in fact an example of Time-Shifted Rationality then the endowment effect would likely be observable in at least some other species. Here’s why, in a nutshell. … If the endowment effect tends to lead on average to behaviors that were adaptive when there are asymmetric risks of keeping versus exchanging, then this isn’t likely to be true only for humans. It should at a minimum be observable in some or all of our closest primate relatives, i.e. the other 4 of the 5 great apes: chimpanzees, orangutans, gorillas, and bonobos.

Second, we predicted that if the endowment effect were in fact an example of Time-Shifted Rationality, the prevalence of the endowment effect in other species is likely to vary across categories of items. This follows because selection pressures can, and very often do, narrowly tailor behavioral predispositions that vary as a function of the evolutionary salience (i.e., significance) of the circumstance. Put another way, evolved behavioral adaptations can be “facultative,” consisting of a hierarchical set of “if-then” predispositions, which lead to alternate behaviors in alternate circumstances. Because no animal evolved to treat all objects the same, there’s no reason to expect that they, or humans, would exhibit the endowment effect equally for all objects, or equally in all circumstances. Some classes of items are obviously more evolutionarily significant than others – simply because value is not distributed evenly across all the items a primate encounters.

Third (and as a logical consequence of prediction (2)), we predicted that the prevalence of the endowment effect will correlate – increasing or decreasing, respectively – with the increasing or decreasing evolutionary salience of the item in question. Evolutionary salience refers to the extent to which the item, under historical conditions, would contribute positively to the survival, thriving, and reproducing of the organism acquiring it.

To test these predictions, we conducted a series of experiments with chimpanzee subjects. No other extant theory generated this set of three predictions. And the results of our experiments corroborated all three.

Our results provided the first evidence of an endowment effect in another species. Specifically, and as predicted, chimpanzees exhibit an endowment effect consonant with many of the human studies that find the effect. As predicted, the prevalence of the effect varies considerably according to the class of item. And, as predicted most specifically, the prevalence was far greater (fourteen times greater, in fact) within a class of trading evolutionarily salient items – here, food items – for each other than it was when trading within a class of non-evolutionarily salient items – here, toys. Put another way, our subjects were fourteen times more likely to hang onto their less-preferred evolutionarily salient item, when they could trade it for their more-preferred evolutionarily salient item, than they were to hang onto their less-preferred item with corresponding, but not evolutionarily salient, items.

On the role of signalling: Costly signalling theory is the idea that for a signal of quality to be honest, it should impose a cost on the signaller that someone without that quality would not be able to bear. For instance, peacocks have large unwieldy tails that consume a lot of resources, with only the highest quality males able to incur this cost.
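The condition for an honest signal can be stated as a simple inequality. The sketch below is my own toy illustration (the functions and numbers are made up, not from any of the papers discussed): a signal separates high-quality from low-quality types only when its cost is bearable for the former but not the latter.

```python
def signal_is_honest(benefit, cost_high, cost_low):
    """A separating equilibrium exists when only high-quality
    signallers find the signal worth its cost:
    cost_high < benefit < cost_low."""
    return cost_high < benefit < cost_low

# Peacock-style example with illustrative numbers: a mating benefit of 10,
# a tail cost of 4 for a high-quality male and 15 for a low-quality one.
print(signal_is_honest(10, 4, 15))   # True: only high-quality males signal
print(signal_is_honest(10, 12, 15))  # False: too costly even for high-quality males
```

The point of the inequality is that honesty is maintained by differential cost, not by any rule against lying: a low-quality signaller is free to grow a big tail, but doing so would leave it worse off.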

To understand how signalling might affect our understanding of human behaviour, I tend to categorise the possible failures to understand behaviour in the following three ways.

First, we can simply fail to understand the objective. If a person’s objective is status, and we try to understand their actions as attempts to maximise income, we might mistakenly characterise their decisions as errors.

Second, we might understand the proximate objective, but fail to realise that there is an underlying ultimate objective. Someone might care about, say, income, but if it is in the context of achieving another objective, such as getting enough income to make a specific purchase, we might similarly fail to properly assess the “rationality” of their decisions. For example, there might be a minimum threshold that leads us to take “risky” actions in relation to our income.

Signalling sometimes falls into this second basket, as the proximate objective is the signal for the ultimate objective. For instance, if we use education as a signal of our cognitive and social skills to get a job, viewing the objective as getting a good education misses the point.

Third, even if we understand the proximate and ultimate objectives, we might fail to understand the mechanism by which the objective is achieved. Signalling can lead to complicated mechanisms that are often overlooked.

To illustrate, even if we know that someone is only seeking further education to increase their employment prospects, we would expect different behaviour if education were a signal rather than a source of skills for use on the job. If education is purely a signal, people may only care about getting the credential, not what they learn. If education serves a more direct purpose, we would expect students to invest much more in learning.

I discuss a couple of these points in my Behavioral Scientist article Rationalizing the “Irrational”. Evolutionary biology is a great source of material on signalling, although as I have written about before, economists did have at least one crucial insight earlier.

Finally, I’ve plugged Rationally Speaking before, and here is a list of some of the episodes I enjoyed the most (there are transcripts if you prefer to read):

Tom Griffiths on how our biases might be signs of rationality

Daniel Lakens on p-hacking

Paul Bloom on empathy

Bryan Caplan on parenting

Phil Tetlock on forecasting

Tom Griffiths and Brian Christian on Algorithms to Live By

Don Moore on overconfidence

Jason Brennan on “Against Democracy”

Jessica Flanigan on self-medication

Alison Gopnik on parenting

Christopher Chabris on collective intelligence

I limited myself to eleven here – there are a lot of other great episodes worth listening to.

4 thoughts on “Me on Rationally Speaking, plus some additional thoughts”

  1. First, thank you for the erudite discussion of the current state of behavioral decision making/economics. I have a question and a comment concerning something that occurred to me during the discussion. Going from my undergraduate economics courses (BA in Finance), both classical and neoclassical economics assumed that each and every human subject is a rational decision maker set on maximizing their wealth. Does this implicitly or explicitly mean that every human has the same rationality paradigm and that only their preferences vary from individual to individual? What I’m poorly trying to get at is whether there has been any attempt to account for the general personality characteristics of individuals in society as seen through a profile like the five-factor test, Myers-Briggs or something else when looking at rationality and deviations from it. It seems to me that assuming a standard rational or even irrational (behavioral economics, what have you) human is a gross over-simplification. Wouldn’t starting out with a variety of rationalities – maybe the 16 types from the Myers-Briggs (I understand that Myers-Briggs is by no means the last word in personality profiling) – be a better place to start?

    1. Human heterogeneity is under-explored in economics/behavioural economics – not just from the perspective of personality, but also broader preferences, sex differences, risk tolerances, IQ and so on. There’s some research in the area, often from a complexity/agent-based model perspective, but it’s certainly not at the centre of the discipline.

      As an aside, on personality I’d prefer to stick with the Big 5. Here’s an article on that:

  2. It would have been a bit stronger in your more recent post on “Trite Lists of Biases…” to have made a reference to this example of confirming the loss aversion heuristic. Altho it’s still not quite an example of what to do “when approaching a tangible problem.”

    It seems similar to why I do actually like the Myers-Briggs (xNTP) over the Big 5 — where I forget my %. Even tho I’m on the E/I border, the other NTP choices are strongly me. And tho I haven’t taken the Harry Potter quiz, there is the reality of choices: A, B, C, D. (Or however many).
    69%, 44%, 43% … and 99% “compatible” (whatever that means in this context)
    For most people, and most choices, going directly to D is good enough.

    (Author is: 99th percentile for extroversion, the 58th percentile for agreeableness, the 29th percentile for conscientiousness, the 43rd percentile for neuroticism and the 99th percentile for openness…)
    99-E, 58-A, 29-C, 43-N, 99-O. With enough data and experience, this might be a better description than ENTP (I like the book Please Understand Me, from 30+ years ago) in many cases, tho I haven’t seen any personality behavior prediction studies which show a superiority of the Big 5.

    And that’s the key to “science” — prediction. How humans, or chimps, will actually behave, among some limited choice set. My own guess is that there’s more data on the M-B personality types and how they act, with a more robust level of prediction that is testable, than on most of the Big 5 ranges.

    Jordan Peterson, like most psychologists today, talks mostly about Big 5. And, in particular, how women usually have more agreeableness than men — and how agreeable folk get paid less (men and women). Do bitchy people, women and men, get paid more? Probably. That might not be the most flattering way to sell the Big 5 more, but seems likely to be effective; perhaps call it “assertiveness training”.

    1. Also, the 19 page transcript was excellent.

      “It’s quite often the case where the problems they’re trying to deal with are effectively, incentive problems. So people are, say, making poor decisions in a business — and yet the real reason they’re making these poor decisions is basically because they’re being paid to make those decisions. They’re selling products to customers that they shouldn’t be selling to, but why are they doing that? Because they’re paid to sell more products. ”

      I’m sure you’ve heard of the Two Things of Economics:
      1) There’s no Free Lunch;
      2) Incentives matter.

      I’m now thinking that our society is drifting towards destruction thru too much reward / social payment for personally irresponsible decisions. In particular promiscuity among poor people, but also the TBTF bailouts of the Big Banks (and their rich, rich owners). When other people’s money pays for your mistakes, your incentive to be careful / responsible / “rational” goes down.

      Similarly, I’d like more gov’t programs that help married working-class parents do much better — but they don’t “need” it as much as similar low income parents who aren’t married. One of the not quite discussed positives about most “morally neutral” market capitalism is that the smart, hardworking folk get more rewards, with a big variation of luck.

Comments welcome
