Books I read in 2018

The best books I read in 2018 - generally released in other years - are below. Where I have reviewed a book, the link leads to that review.

Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (2014) - Changed my mind, and gave me a framework for thinking about the problem that I didn't have before.

Annie Duke, Thinking in Bets: Making Smarter Decisions When You Don't Have All the Facts (2018) - While I have many small quibbles with the content, and it could easily have been a long-form article, I liked the overarching approach and framing.

Gary Klein on confirmation bias in heuristics and biases research, and explaining everything

Confirmation bias

In Sources of Power: How People Make Decisions, Gary Klein writes: Kahneman, Slovic, and Tversky (1982) present a range of studies showing that decision makers use a variety of heuristics, simple procedures that usually produce an answer but are not foolproof. … The research strategy was not to demonstrate how poorly we make judgments but to use these findings to uncover the cognitive processes underlying judgments of likelihood.

In contrast to less-is-more claims, ignoring information is rarely, if ever, optimal

From the abstract of an interesting paper Heuristics as Bayesian inference under extreme priors by Paula Parpart and colleagues: Simple heuristics are often regarded as tractable decision strategies because they ignore a great deal of information in the input data. One puzzle is why heuristics can outperform full-information models, such as linear regression, which make full use of the available information. These “less-is-more” effects, in which a relatively simpler model outperforms a more complex model, are prevalent throughout cognitive science, and are frequently argued to demonstrate an inherent advantage of simplifying computation or ignoring information.
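As a toy illustration of the kind of simple heuristic at issue here (the example is mine, not the paper's), below is a minimal sketch of the take-the-best heuristic from the fast-and-frugal tradition: it consults cues one at a time in order of validity, decides on the first cue that discriminates, and ignores everything else.

```python
def take_the_best(option_a, option_b, cues):
    """Decide between two options using take-the-best.

    cues: functions returning cue values, ordered from most to least valid.
    The first cue that discriminates between the options decides;
    all remaining cues are ignored. Returns None (guess) on a full tie.
    """
    for cue in cues:
        va, vb = cue(option_a), cue(option_b)
        if va != vb:
            return option_a if va > vb else option_b
    return None

# Hypothetical example: judging which of two cities is larger from binary cues.
cues = [
    lambda city: city["is_capital"],      # most valid cue, checked first
    lambda city: city["has_major_team"],  # only consulted if the cue above ties
]
a = {"name": "A", "is_capital": 1, "has_major_team": 0}
b = {"name": "B", "is_capital": 0, "has_major_team": 1}
print(take_the_best(a, b, cues)["name"])  # the first cue decides: "A"
```

Note how much information the heuristic discards: city B's advantage on the second cue never gets a vote, which is exactly the sense in which such strategies "ignore a great deal of information in the input data".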

My latest in Behavioral Scientist: Simple heuristics that make algorithms smart

My latest contribution at Behavioral Scientist is up. Here’s an excerpt: Modern discussions of whether humans will be replaced by algorithms typically frame the problem as a choice between humans on one hand or complex statistical and machine learning models on the other. For problems such as image recognition, this is probably the right frame. Yet much of the past success of algorithms relative to human judgment points us to a third option: the mechanical application of simple models and heuristics.
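One classic instance of a mechanically applied simple model - my illustration, not drawn from the article excerpt - is a unit-weight linear model in the spirit of Robyn Dawes's "improper" linear models: rather than estimating regression weights from data, every cue gets a weight of +1 or -1 and the (standardised) cue values are simply summed.

```python
def unit_weight_score(candidate, cue_directions):
    """Score a candidate with a unit-weight ("improper") linear model.

    cue_directions: dict mapping cue name -> +1 or -1, the assumed
    direction of each cue's effect. No weights are estimated from data.
    Cue values are assumed to be standardised (z-scored).
    """
    return sum(sign * candidate[cue] for cue, sign in cue_directions.items())

# Hypothetical example: ranking job applicants on three standardised cues.
directions = {"test_score": +1, "experience": +1, "absences": -1}
alice = {"test_score": 1.2, "experience": 0.5, "absences": -0.3}
print(unit_weight_score(alice, directions))  # 1.2 + 0.5 + 0.3 = 2.0
```

Applied consistently, even this crude scoring rule can rival fitted models, because it sacrifices fit to the training sample in exchange for robustness out of sample.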

A problem in the world or a problem in the model

In reviewing Michael Lewis’s The Undoing Project, John Kay writes: Since Paul Samuelson’s Foundations of Economic Analysis, published in 1947, mainstream economics has focused on an axiomatic approach to rational behaviour. The overriding requirement is for consistency of choice: if A is chosen when B is available, B will never be selected when A is available. If choices are consistent in this sense, their outcomes can be described as the result of optimisation in the light of a well-defined preference ordering.
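The consistency requirement Kay describes can be stated as a mechanical check over observed choices. The sketch below is my own toy illustration (not anything from Kay's review, and the data are hypothetical): record each pair "A chosen while B was available", then flag a violation whenever some pair appears in both directions.

```python
def consistent(observations):
    """Check Samuelson-style consistency of choice: if A is chosen when B
    is available, B must never be chosen when A is available.

    observations: list of (chosen, available_set) pairs.
    """
    revealed = set()  # (a, b) means a was chosen while b was available
    for chosen, available in observations:
        for other in available:
            if other != chosen:
                revealed.add((chosen, other))
    # A violation is any pair revealed-preferred in both directions.
    return not any((b, a) in revealed for (a, b) in revealed)

print(consistent([("A", {"A", "B"}), ("A", {"A", "C"})]))  # True
print(consistent([("A", {"A", "B"}), ("B", {"A", "B"})]))  # False
```

If choices pass this check, they behave as if they maximise a well-defined preference ordering - which is all the axiomatic approach requires.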

The Rhetoric of Irrationality

From the opening of Lola Lopes’s 1991 article The Rhetoric of Irrationality (pdf) on the heuristics and biases literature: Not long ago, Newsweek ran a feature article describing how researchers at a major midwestern business school are exploring the process of choice in hopes of helping business executives and business students improve their ‘often rudimentary decision-making skills’ … [T]he researchers have, in the author’s words, ‘sadly’ concluded that ‘most people’ are ‘woefully muddled information processors who stumble along ill-chosen shortcuts to reach bad conclusions’.

Genoeconomics and designer babies: The rise of the polygenic score

When genome-wide association studies (GWAS) were first used to study complex polygenic traits, the results were underwhelming. Few genes with any predictive power were found, and those that were typically explained only a fraction of the genetic effects that twin studies suggested were there. This led to divergent responses, ranging from continued resistance to the idea that genes affect anything, to a quiet confidence that once sample sizes became large enough those genetic effects would be found.

How happy is a paraplegic a year after losing the use of their legs?

From Dan Gilbert’s 2004 TED talk, now viewed over 16 million times: Let’s see how your experience simulators are working. Let’s just run a quick diagnostic before I proceed with the rest of the talk. Here’s two different futures that I invite you to contemplate. You can try to simulate them and tell me which one you think you might prefer. One of them is winning the lottery. This is about 314 million dollars.

How likely is "likely"?

From Andrew Mauboussin and Michael Mauboussin: In a famous example (at least, it’s famous if you’re into this kind of thing), in March 1951, the CIA’s Office of National Estimates published a document suggesting that a Soviet attack on Yugoslavia within the year was a “serious possibility.” Sherman Kent, a professor of history at Yale who was called to Washington, D.C. to co-run the Office of National Estimates, was puzzled about what, exactly, “serious possibility” meant.

Avoiding trite lists of biases and pictures of human brains on PowerPoint slides

From a book chapter by Greg Davies and Peter Brooks, Practical Challenges of Implementing Behavioral Finance: Reflections from the Field (quotes taken from a pre-print): Taken in isolation, the ideas and concepts that comprise the field of behavioral finance are of very little practical use. Indeed, many of the attempts to apply these ideas amount to little more than a trite list of biases and pictures of human brains on PowerPoint slides.