The illusion of the illusion of control

Author: Jason Collins

Published: November 21, 2016

In the spirit of my recent post on overconfidence, the illusion of control is another “bias” where imperfect information might be a better explanation for what is occurring.

The illusion of control is a common finding in psychology: people believe they can control things that they cannot. People prefer to pick their own lottery numbers rather than have them randomly allocated, and are even willing to pay for the privilege. In laboratory games, people often report having control over outcomes that were randomly generated.

This effect was labelled by Ellen Langer as the illusion of control (for an interesting read about Langer's other work, see here). The decision-making advice that naturally flows from this - advice you will find in plenty of books building on the illusion of control literature - is that we need to recognise that we control less than we think. Luck plays a larger role than we believe.

But when you ask people about their control over random events, which is the typical experimental setup in this literature, you can only get errors in one direction - a belief that they have more control than they actually do. It is not possible to believe you have less than no control.

So what do people believe in situations where they do have some control?

In Left Brain, Right Stuff, Phil Rosenzweig reports on research (pdf) by Francesca Gino, Zachariah Sharek and Don Moore in which participants had varying degrees of control over whether clicking a mouse would change the colour of the screen. Those who had little or no control (the click worked 0% or 15% of the time) tended to believe they had more control than they did - an illusion of control.

But those who had high control (the click worked 85% of the time) believed they had less control than they actually did. Rather than suffering an illusion of control, they failed to recognise the degree of control that they had. The one point of accurate calibration was at 100% control.

The net finding of this and other experiments is that we don't systematically have an illusion of control. Rather, we have imperfect information about our level of control. When control is low, we tend to overestimate it; when it is high (but not perfect), we tend to underestimate it.
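
To see how a bounded response scale alone can generate something like this pattern, here is a toy simulation - my own sketch, not anything from the paper. It assumes each participant's belief about their control is the true level plus unbiased noise, clipped to the 0-1 scale. At 0% control the errors can only run upward, and at high control the clipping pulls the average estimate down.

```python
import random

# Toy model (an illustrative assumption, not the paper's method):
# a participant's belief about their control equals the true level
# plus unbiased noise, clipped to the bounded 0-1 scale.

def mean_perceived_control(true_control, noise_sd=0.2, n=100_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        belief = true_control + rng.gauss(0, noise_sd)
        total += min(1.0, max(0.0, belief))  # beliefs cannot leave the scale
    return total / n

for level in (0.0, 0.15, 0.85, 1.0):
    print(f"true control {level:.2f} -> mean reported {mean_perceived_control(level):.2f}")
```

Under these assumptions the mean reported control sits above the truth at 0% and 15% and below it at 85%, mirroring the pattern above. It is only an illustration of the measurement point: unlike the experimental data, this simple model does not produce accurate calibration at 100% control.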

That the illusion of control was previously seen as acting largely in one direction was due to experimental design. When people have no control and can only err in one way, that is naturally what will be found. Gino and friends term this problem the illusion of the illusion of control.

So when it comes to decision-making advice, we need to be aware of the context. If someone is picking stocks or something of that nature, an illusion of control will not help them. But in their day-to-day lives, where they have influence over many outcomes, underestimating their control could be a serious error.

Should we be warning against underestimating control? If we are going to err consistently in one direction, it is not clear to me that an illusion of control is the greater concern. Maybe we should err on the side of believing we can get things done.

*As an aside, there is a failed replication (pdf) of one of the experiments from Langer's 1975 paper, the paper for which the illusion is named.