Opposing biases
From the preface of one printing of Philip Tetlock’s Expert Political Judgement (hat tip to Robert Wiblin, who quoted this passage in the introduction to an 80,000 Hours podcast episode):
The experts surest of their big-picture grasp of the deep drivers of history, the Isaiah Berlin–style “hedgehogs,” performed worse than their more diffident colleagues, or “foxes,” who stuck closer to the data at hand and saw merit in clashing schools of thought. That differential was particularly pronounced for long-range forecasts inside experts’ domains of expertise.
…
Hedgehogs were not always the worst forecasters. Tempting though it is to mock their belief-system defenses for their often too-bold forecasts—like “off-on-timing” (the outcome I predicted hasn’t happened yet, but it will) or the close-call counterfactual (the outcome I predicted would have happened but for a fluky exogenous shock)—some of these defenses proved quite defensible. And, though less opinionated, foxes were not always the best forecasters. Some were so open to alternative scenarios (in chapter 7) that their probability estimates of exclusive and exhaustive sets of possible futures summed to well over 1.0. Good judgment requires balancing opposing biases. Over-confidence and belief perseverance may be the more common errors in human judgment but we set the stage for over-correction if we focus solely on these errors and ignore the mirror image mistakes, of under-confidence and excessive volatility.
I can see why this idea of opposing biases makes the correction of “biases” difficult.
But before we get to the correction of biases, this concept of opposing biases points to a major difficulty with behavioural analyses of decision making. When you have, say, both loss aversion and overconfidence in your bag of explanations for poor decision making, you can explain almost anything after the fact. The gamble turned out poorly? Overconfidence. Didn’t take the gamble? Loss aversion.
Recently I’ve heard a lot of people talking about action bias. There is also status quo bias. Again, a pair of biases with which we can explain anything.