The three faces of overconfidence

Author: Jason Collins
Published: August 29, 2018

I have complained before about people being too quick to label poor decisions as due to “overconfidence”. For one, overconfidence takes several distinct forms, and it is a mistake to treat them as the same. Further, these forms vary in their pervasiveness.

The last time I made this complaint I drew on an article by Don Moore and Paul Healy, “The Trouble with Overconfidence” (pdf). A more recent article by Don Moore and Derek Schatz (pdf) provides some further colour on this point (HT: Julia Galef). It’s worth pulling out a few excerpts.

So what are these distinct forms? Overestimation, overplacement and overprecision. (It’s also useful to disambiguate overoptimism, which I’ll touch on at the end of this post.)

Overestimation is thinking that you’re better than you are. Donald Trump’s claim to be worth $10 billion (White, 2016) represented an overestimate relative to a more credible estimate of $4.5 billion by Forbes magazine (Peterson-Withorn, 2016). A second measure of overconfidence is overplacement: the exaggerated belief that you are better than others. When Trump claimed to have achieved the largest electoral victory since Ronald Reagan (Cummings, 2017), he was overplacing himself relative to other presidents. A third form of overconfidence is overprecision: being too sure you know the truth. Trump displays overprecision when he claims certainty about views which are contradicted by reality. For example, Trump claimed that thousands of Arab Americans in New Jersey publicly celebrated the fall of the World Trade Center on September 11th, 2001, without evidence supporting the certainty of his assertion (Fox News, 2015).

When diagnosing overconfidence, people often conflate the three. Pointing out that 90% of people believe they are better than average drivers (overplacement) is not evidence that a CEO was overconfident in their decision to acquire a competitor (more plausibly overestimation).

Overestimation

People tend to overestimate their performance on hard tasks, but underestimate it on easy ones.

In contrast to the widespread perception that the psychological research is rife with evidence of overestimation (Sharot, 2011), the evidence is in fact thin and inconsistent. Most notably, it is easy to find reversals in which people underestimate their performance, how good the future will be, or their chances of success (Moore & Small, 2008). When a task is easy, research finds that people tend to underestimate performance (Clark & Friesen, 2009). If you ask people to estimate their chances of surviving a bout of influenza, they will radically underestimate this high probability (Slovic, Fischhoff, & Lichtenstein, 1984). If you ask smokers their chances of avoiding lung cancer, they will radically underestimate this high probability (Viscusi, 1990).

The powerful influence of task difficulty (or the commonness of success) on over- and underestimations of performance has long been known as the hard-easy effect (Lichtenstein & Fischhoff, 1977). People tend to overestimate their performance on hard tasks and underestimate it on easy tasks. Any attempt to explain the evidence on overestimation must contend with the powerful effect of task difficulty.

Overplacement

In a reversal of the pattern for overestimation, people tend to overplace on easy tasks but underplace on hard ones.

The evidence for “better-than-average” beliefs is so voluminous that it has led a number of researchers to conclude that overplacement is nearly universal (Beer & Hughes, 2010; Chamorro-Premuzic, 2013; Dunning, 2005; Sharot, 2011; Taylor, 1989). However, closer examination of this evidence suggests it suffers from a few troubling limitations (Harris & Hahn, 2011; Moore, 2007). Most of the studies measuring better-than-average beliefs use vague response scales that make it difficult to compare beliefs with reality. The most common measure asks university students to rate themselves relative to the average student of the same sex on a 7-point scale running from “Much worse than average” to “Much better than average.” Researchers are tempted to conclude that respondents are biased if more than half claim to be above average. But this conclusion is unwarranted (Benoît & Dubra, 2011). After all, in a skewed distribution the majority will be above average. Over 99% of the population has more legs than average.
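
The legs example is just arithmetic on a skewed distribution: when a small minority drags the mean down, nearly everyone sits above it. A minimal sketch in Python, with the population proportions invented purely for illustration:

```python
import numpy as np

# Sketch of the "more legs than average" point: in a skewed distribution
# the majority can sit above the mean. Proportions below are invented.
rng = np.random.default_rng(0)
legs = rng.choice([0, 1, 2], size=1_000_000, p=[0.0005, 0.0045, 0.9950])

mean_legs = legs.mean()
share_above = (legs > mean_legs).mean()

print(f"mean legs: {mean_legs:.4f}")              # roughly 1.9945
print(f"share above average: {share_above:.1%}")  # roughly 99.5%
```

By the same logic, a majority of drivers can truthfully be better than average on a skewed measure such as accident counts, which is why vague “better than average” comparisons are weak evidence of bias.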

Within the small set of studies not vulnerable to these critiques, the prevalence of overplacement shrinks. Indeed, underplacement is rife. People think they are less likely than others to win difficult competitions (Moore & Kim, 2003). When the teacher decides to make the exam harder for everyone, students expect their grades to be worse than others’ even when it is common knowledge that the exam will be graded on a forced curve (Windschitl, Kruger, & Simms, 2003). People believe they are worse jugglers than others, that they are less likely than others to win the lottery, and less likely than others to live past 100 (Kruger, 1999; Kruger & Burrus, 2004; Moore, Oesch, & Zietsma, 2007; Moore & Small, 2008). These underplacement results are striking, not only because they vitiate claims of universal overplacement, but also because they seem to contradict the hard-easy effect in overestimation, which finds that people most overestimate their performance on difficult tasks.

Moore and Healy offer an explanation for the opposing effects of task difficulty on overestimation and overplacement: myopia. I wrote about that in the earlier post.
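
To make the intuition concrete, here is a stylized simulation of the differential-information story as I read it: you get a noisy signal of your own score and shrink it toward a prior, but you get no signal about others, so your estimate of them stays at the prior. All parameters are invented for illustration, not taken from Moore and Healy:

```python
import numpy as np

# Stylized sketch: noisy self-signal shrunk toward a prior, no signal
# about others. All numbers are invented for illustration.
rng = np.random.default_rng(0)
n = 100_000
prior_mean, prior_sd, noise_sd = 50.0, 10.0, 10.0
w = prior_sd**2 / (prior_sd**2 + noise_sd**2)  # Bayesian shrinkage weight

for label, shift in [("easy task", +15.0), ("hard task", -15.0)]:
    actual = rng.normal(prior_mean + shift, prior_sd, n)  # realised scores
    signal = actual + rng.normal(0.0, noise_sd, n)        # noisy self-signal
    self_est = w * signal + (1 - w) * prior_mean          # shrunk estimate
    others_est = prior_mean                               # no signal on others
    print(f"{label}: overestimation {(self_est - actual).mean():+5.1f}, "
          f"placement {(self_est - others_est).mean():+5.1f}")

# hard task: positive overestimation but negative placement (underplacement);
# easy task: the reverse. This matches the opposing hard-easy patterns above.
```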

Overprecision

Overprecision is pervasive but poorly understood.

A better approach to the study of overprecision asks people to specify a confidence interval around their estimates, such as a confidence interval that is wide enough that there is a 90% chance the right answer is inside it and only a 10% chance the right answer is outside it (Alpert & Raiffa, 1982). Results routinely find that hit rates inside 90% confidence intervals are below 50%, implying that people set their ranges too precisely—acting as if they are inappropriately confident their beliefs are accurate (Moore, Tenney, & Haran, 2016). This effect even holds across levels of expertise (Atir, Rosenzweig, & Dunning, 2015; McKenzie, Liersch, & Yaniv, 2008). However, one legitimate critique of this approach is that ordinary people are unfamiliar with confidence intervals (Juslin, Winman, & Olsson, 2000). That is not how we express confidence in our everyday lives, so maybe unfamiliarity contributes to errors.
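
To see how overly narrow intervals translate into low hit rates, here is a minimal sketch of a judge whose subjective uncertainty is a third of their actual error. The error magnitudes are invented, not taken from the paper:

```python
import numpy as np

# Sketch of why overly narrow "90%" intervals produce low hit rates.
# Intervals are centred on a noisy estimate; an overprecise judge acts
# as if their error were a third of its true size. Numbers invented.
rng = np.random.default_rng(0)
n = 100_000
error_sd = 1.0                          # true SD of estimation error
truth = rng.normal(0.0, 1.0, n)
estimate = truth + rng.normal(0.0, error_sd, n)

z90 = 1.645                             # half-width multiplier for 90% coverage

for label, assumed_sd in [("calibrated", error_sd), ("overprecise", error_sd / 3)]:
    half_width = z90 * assumed_sd
    hit_rate = (np.abs(truth - estimate) <= half_width).mean()
    print(f"{label}: hit rate = {hit_rate:.1%}")

# calibrated -> about 90%; overprecise -> about 42%, in the sub-50% range
# reported by Moore, Tenney, and Haran (2016).
```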

Overprecision is the most pervasive but least understood form of overconfidence. Unfortunately, researchers use just a few paradigms to study it, and they rely on self-reports of beliefs using questions people are rarely called on to answer in daily life.

(Although not covered in Moore and Schatz’s paper, Gigerenzer also offers a critique that I’ll discuss in a forthcoming post.)

Overoptimism

Moore and Healy don’t touch on overoptimism directly in their paper, but in an interview with Julia Galef on the Rationally Speaking podcast, Moore touches on this point:

Julia: Before we conclude this disambiguation portion of the podcast I want to ask about optimism, which I am using to mean thinking that some project of yours has a greater chance of success than you’re justified in thinking it does. How does that fit into that three-way taxonomy?

Don: It is an excellent question, and optimism has been studied a great deal. Perhaps the most famous scholars of optimism are Charles Carver and Mike Scheier, who have a scale that assesses the personality trait of optimism. Their usage of the term is actually not that far from the colloquial usage, where to be optimistic is just to believe that good things are going to happen. Optimism is distinctively about a forecast for the future, and whether you think good things or bad things are going to happen to you.

Interestingly, this trait of optimism seems very weakly related to actual specific measures of overconfidence. When I asked Mike Scheier why his optimistic personality trait didn’t correlate with any of my measures of overconfidence, he said, “Oh, I wouldn’t expect it to.”

Julia: I would expect it to!

Don: Yeah. My [reaction] actually was, “Well, what the heck does it mean, if it doesn’t correlate with any specific beliefs?”

I think it’s hard to reconcile those in any sort of coherent or rational framework of beliefs. But I have since had to concede that there is a real psychological phenomenology, wherein you can have this free floating, positive expectation that doesn’t commit you to any specific delusional beliefs.