Michael Mauboussin’s Think Twice: Harnessing the Power of Counterintuition

Author

Jason Collins

Published

August 1, 2018

Michael Mauboussin’s Think Twice: Harnessing the Power of Counterintuition is a multi-disciplinary book on how to improve your decision making. The book is framed around eight common decision-making mistakes, with Mauboussin drawing on disciplines including psychology, complexity theory and statistics.

Given its scope, the book does not treat most of its subject areas in great depth. But its interdisciplinary nature means that most people are likely to find something new. I gained pointers to a lot of interesting reading, plus some new ways of thinking about familiar material. Below are a few interesting parts.

One early chapter contrasts the inside and outside views when making a judgement or prediction, a perspective I have often found helpful. The inside view uses the specific information about the problem at hand. The outside view looks at whether there are similar situations - a reference class - that can provide a statistical basis for the judgement. The simplest statistical basis is the “base rate” for that event - the probability of it generally occurring. The outside view, even a simple base rate, is typically a better indicator of the outcome than an estimate derived from the inside view.

Mauboussin points out that ignorance of the outside view is not the sole obstacle to its use. People will often ignore base rate information even when it is right in front of them. Mauboussin discusses an experiment by Freymuth and Ronan (pdf) where the experimental participants selected treatment for a fictitious disease. When the participants were able to choose a treatment with a 90% success rate that was paired with a positive anecdote, they chose it 90% of the time (choosing a control treatment with 50% efficacy the remaining 10% of the time). But when paired with a negative anecdote, only 39% chose the 90% efficacy treatment. Similarly, a treatment with 30% efficacy paired with a negative anecdote was chosen only 7% of the time, but this increased to 78% when it was paired with a positive anecdote. The stories drowned out the base rate information.
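To see what following the anecdotes costs, here is a back-of-the-envelope calculation (my illustration, not from the book or the study) using the choice shares reported above for the 90% efficacy treatment:

```python
# Expected cure rate when anecdotes sway the choice between a 90%
# efficacy treatment and a 50% efficacy control (figures from the
# Freymuth and Ronan experiment as described above).

EFFICACY_TREATMENT = 0.90
EFFICACY_CONTROL = 0.50

def expected_cure_rate(p_choose_treatment):
    """Expected success given the share choosing the 90% treatment."""
    return (p_choose_treatment * EFFICACY_TREATMENT
            + (1 - p_choose_treatment) * EFFICACY_CONTROL)

print(expected_cure_rate(0.90))  # positive anecdote: ~0.86
print(expected_cure_rate(0.39))  # negative anecdote: ~0.66
print(expected_cure_rate(1.00))  # always follow the base rate: 0.90
```

Letting a single negative story drive the choice cuts the expected cure rate from 90% to around 66%.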

To elicit an outside view, Mauboussin suggests the simple trick of pretending you are predicting for someone else. Think about how the event will turn out for others. This will abstract you from the distracting inside view information and bring you closer to the more reliable outside view.

Mauboussin is at his most interesting, and departs furthest from standard examinations of decision making, when he considers decisions in complex systems (the environment in which many of our decisions are made).

One of his themes is that it is nearly impossible to manage a complex system. Understanding any individual part may be of limited use in understanding the whole, and interfering with that part may have many unintended consequences. The century of bungling in Yellowstone National Park (via Alston Chase’s book Playing God in Yellowstone) provides an example. In an increasingly connected world, more of our decisions are going to be in these types of systems.

One barrier to understanding a complex system is that the agents in an apparently intelligent system may not be that intelligent themselves. Mauboussin quotes biologist Deborah Gordon:

If you watch an ant try to accomplish something, you’ll be impressed by how inept it is. Ants aren’t smart, ant colonies are.

Complex systems often perform well at a system level despite the dumb agents. No single ant understands what the colony is doing, yet the colony does well.

Mauboussin turns this point into a critique of behavioural finance, suggesting it is a mistake to look at individuals rather than the market:

Regrettably, this mistake also shows up in behavioral finance, a field that considers the role of psychology in economic decision making. Behavioral finance enthusiasts believe that since individuals are irrational—counter to classical economic theory—and markets are made up of individuals, then markets must be irrational. This is like saying, “We have studied ants and can show that they are bumbling and inept. Therefore, we can reason that ant colonies are bumbling and inept.” But that conclusion doesn’t hold if more is different—and it is. Market irrationality does not follow from individual irrationality. You and I both might be irrationally overconfident, for example, but if you are an overconfident buyer and I am an overconfident seller, our biases may cancel out. In dealing with systems, the collective behavior matters more. You must carefully consider the unit of analysis to make a proper decision.

Mauboussin’s discussion of the often misunderstood concept of reversion (regression) to the mean is also useful. Here are some snippets:

“Mediocrity tends to prevail in the conduct of competitive business,” wrote Horace Secrist, an economist at Northwestern University, in his 1933 book, The Triumph of Mediocrity in Business. With that stroke of the pen, Secrist became a lasting example of the second mistake associated with reversion to the mean—a misinterpretation of what the data says. Secrist’s book is truly impressive. Its four hundred-plus pages show mean-reversion in series after series in an apparent affirmation of the tendency toward mediocrity.

In contrast to Secrist’s suggestion, there is no tendency for all companies to migrate toward the average or for the variance to shrink. Indeed, a different but equally valid presentation of the data shows a “movement away from mediocrity and [toward] increasing variation.” A more accurate view of the data is that over time, luck reshuffles the same companies and places them in different spots on the distribution. Naturally, companies that had enjoyed extreme good or bad luck will likely revert to the mean, but the overall system looks very similar through time. …

A counterintuitive implication of mean reversion is that you get the same result whether you run the data forward or backward. So the parents of tall children tend to be tall, but not as tall as their children. Companies with high returns today had high returns in the past, but not as high as the present. …

Here’s how to think about it. Say results are part persistent skill and part transitory luck. Extreme results in any given period, reflecting really good or bad luck, will tend to be less extreme either before or after that period as the contribution of luck is less significant. …
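A small simulation makes both points concrete - the dilution of luck and the forward/backward symmetry. This is a sketch of my own (not from the book), where each company’s result is a fixed skill draw plus fresh noise each period:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Each company's result = persistent skill + transitory luck,
# with the luck drawn fresh each period.
skill = rng.normal(0, 1, n)
period1 = skill + rng.normal(0, 1, n)
period2 = skill + rng.normal(0, 1, n)
period3 = skill + rng.normal(0, 1, n)

# Select the top decile of period-2 performers.
top = period2 > np.quantile(period2, 0.9)

print(period2[top].mean())  # ~2.5: extreme, skill plus good luck
print(period1[top].mean())  # ~1.2: reverted, looking backward
print(period3[top].mean())  # ~1.2: reverted, looking forward
```

The extreme group’s other periods look the same whether they come before or after the extreme one, because luck is independent across periods while skill persists.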

Related to this point, Mauboussin offers a simple test of whether an activity involves skill: can you lose on purpose? For example, try to build a stock portfolio that will do worse than the benchmark.
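One way to see why the test works, extending the sketch above (again my own illustration rather than Mauboussin’s): when persistent skill exists, deliberately picking past losers reliably underperforms; in a pure-luck market it does not:

```python
import numpy as np

rng = np.random.default_rng(1)

def losers_next_period(skill_sd, n_assets=100_000):
    """Deliberately pick last period's worst decile of assets and
    return their average result in the following period."""
    skill = rng.normal(0, skill_sd, n_assets)    # persistent component
    past = skill + rng.normal(0, 1, n_assets)    # observed track record
    future = skill + rng.normal(0, 1, n_assets)  # next period's results
    worst = past < np.quantile(past, 0.1)        # the 'lose on purpose' picks
    return future[worst].mean()

print(losers_next_period(skill_sd=1.0))  # ~ -1.2: skill exists, you can lose on purpose
print(losers_next_period(skill_sd=0.0))  # ~  0.0: pure luck, you cannot
```

If your deliberately bad picks do no worse than average, luck dominates the activity.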

Mauboussin links reversion to the mean to the “halo effect” (I recommend reading Phil Rosenzweig’s book of that name). The halo effect is the tendency of impressions from one area to influence impressions of another. In business, if people see a company with good profits, they will tend to assess the CEO’s management style, communications, organisational structure and strategic direction as all being positive.

When the company’s performance later reverts to the mean, people then interpret all of these things as having gone bad, when it is quite possible nothing has changed. The result is that strong performance tends to be followed by glowing stories in the media, and then by the fall:

Tom Arnold, John Earl, and David North, finance professors at the University of Richmond, reviewed the cover stories that BusinessWeek, Forbes, and Fortune had published over a period of twenty years. They categorized the articles about companies from most bullish to most bearish. Their analysis revealed that in the two years before the cover stories were published, the stocks of the companies featured in the bullish articles had generated abnormal positive returns of more than 42 percentage points, while companies in the bearish articles underperformed by nearly 35 percentage points, consistent with what you would expect. But for the two years following the articles, the stocks of the companies that the magazines criticized outperformed the companies they praised by a margin of nearly three to one.

And to close, Mauboussin provides a great example of bureaucratic kludge preventing the use of a checklist in medical treatment:

Toward the end of 2007, a federal agency called the Office for Human Research Protections charged that the Michigan program violated federal regulations. Its baffling rationale was that the checklist represented an alteration in medical care similar to an experimental drug and should continue only with federal monitoring and the explicit written approval of the patient. While the agency eventually allowed the work to continue, concerns about federal regulations needlessly delayed the program’s progress elsewhere in the United States. Bureaucratic inertia triumphed over a better approach.