Not long ago, Newsweek ran a feature article describing how researchers at a major midwestern business school are exploring the process of choice in hopes of helping business executives and business students improve their ‘often rudimentary decision-making skills’.
[T]he researchers have, in the author’s words, ‘sadly’ concluded that ‘most people’ are ‘woefully muddled information processors who stumble along ill-chosen shortcuts to reach bad conclusions’. Poor ‘saps’ and ‘suckers’ that we are, a list of our typical decision flaws would be so lengthy as to ‘demoralize’ Solomon.
This is a powerful message, sweeping in its generality and heavy in its social and political implications. It is also a strange message, for it concerns something that we might suppose could not be meaningfully studied in the laboratory, that being the fundamental adequacy or inadequacy of people’s capacity to choose and plan wisely in everyday life. Nonetheless, the message did originate in the laboratory, in studies that have no greater claim to relevance than hundreds of others that are published yearly in scholarly journals. My goal in this article is to trace how this message of irrationality has been selected out of the literature and how it has been changed and amplified in passing through the logical and expository layers that exist between experimental conception and popularization.
Below are some of the more interesting passages. First:
Prior to 1970 or so, most researchers in judgment and decision-making believed that people are pretty good decision-makers. In fact, the most frequently cited summary paper of that era was titled ‘Man as an intuitive statistician’ (Peterson & Beach, 1967). Since then, however, opinion has taken a decided turn for the worse, though the decline was not in any sense demanded by experimental results. Subjects did not suddenly become any less adept at experimental tasks nor did experimentalists begin to grade their performance against a tougher standard. Instead, researchers began selectively to emphasize some results at the expense of others.
The Science article [Kahneman and Tversky’s 1974 article (pdf)] is the primary conduit through which the laboratory results made their way out of psychology and into other branches of the social sciences. … About 20 percent of the citations were in sources outside psychology. Of these, all used the citation to support the unqualified claim that people are poor decision-makers.
Acceptance of this sort is not the norm for psychological research. Scholars from other fields in the social sciences such as sociology, political science, law, economics, business and anthropology look with suspicion on the tightly controlled experimental tasks that psychologists study in the laboratories, particularly when the studies are carried out using student volunteers. In the case of the biases and heuristics literature, however, the issue of generalizability is seldom raised and it is rarely so much as mentioned that the cited conclusions are based on laboratory research. Human incompetence is presented as a fact, like gravity.
If you think about it, this is a great trick, for the studies in question have managed to shed their experimental details without sacrificing scientific authority. Somehow the message of irrationality has been sprung free of its factual supports, allowing it to be seen entire, unobstructed by the hopeful assumptions and tedious methodologies that brace up all laboratory research.
One interesting thread concerns the purpose of the experiments and the contrasting conclusions drawn from them. For this discussion, Lopes looks at six of the experiments in four of Kahneman and Tversky’s papers published between 1971 and 1973, plus a summary article in Science from 1974. One example involved this question:
Consider the letter R. Is R more likely to appear in the first position of a word or the third position of a word?
This problem involves the availability heuristic, the tendency to estimate the probability of an event by the ease with which instances of the event can be remembered or constructed in the imagination. Under the availability hypothesis, people will see how many words they can generate with R in the first or third position. It is easier to think of words with R in the first position than the third, leading them to conclude – in error – that R is more common in the first.
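The comparison the experiment turns on is easy to make concrete. Here is a minimal Python sketch (my illustration, not part of the original studies) that counts how often a letter appears in the first versus the third position of words; the word list is a tiny made-up sample, and a real comparison would need a large corpus, where R genuinely is more common in the third position:

```python
def position_counts(words, letter):
    """Count how many words have `letter` in the first vs. third position."""
    first = sum(1 for w in words if len(w) >= 1 and w[0] == letter)
    third = sum(1 for w in words if len(w) >= 3 and w[2] == letter)
    return first, third

# Tiny illustrative sample (not a representative corpus).
sample = ["road", "river", "car", "fire", "run", "born", "word", "rain", "care"]
first, third = position_counts(sample, "r")
```

The availability heuristic amounts to replacing this exhaustive count with a quick mental sampling, and words are far easier to retrieve by their first letter than by their third.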
[T]he question is posed so that there are only two possible results. One of these will occur if the subject reasons in accord with probability theory, and the other, if the subject reasons heuristically. …
By this logic, the implications of Figure 1 [a summary of the results] are clear: subjects reason heuristically and not according to probability theory. That is the result, signed, sealed and delivered, courtesy of strong inference. But the main contribution of the research is not this result since few would have supposed that naive people know much about combinations or variances of binomial proportions or how often R appears in the third position of words. Instead, the research commands attention and respect because the various problems function as thought experiments, strengthening our grasp of the task domain by revealing critical psychological variables that do not show up in the normative analysis. …
There is, however, another way to construe this set of studies and that is by considering the predictions of the two processing modes at a higher level of abstraction. If we think about performance in terms of correctness, we see that in every case the probability mode predicts correct answers and the heuristic mode predicts errors. … [T]he sheer weight of all the wrong answers tends to deform the basic conclusion, bending it away from an evaluatively neutral description of the process and toward something more like ‘people use heuristics to judge probabilities and they are wrong’, or even ‘people make mistakes when they judge probabilities because they use heuristics’.
Happily, conclusions like these do not hold up. This is because the tuning that is necessary for constructing problems that allow strong inference on processing questions is systematically misleading when it comes to asking evaluative questions. For example, consider the letter R problem. Why was R chosen for study and not, say, B? … Of the 20 possible consonants, 12 are more common in the first position and 8 are more common in the third position. All of the consonants that Kahneman and Tversky studied were taken from the third-position group even though there are more consonants in the first-position group.
The selection of consonants was not malicious. Their use is dictated by the strong inference logic since only they yield unambiguous answers to the processing question. In other words, when a subject says that R occurs more frequently in the first position, we know that he or she must be basing the judgment on availability, since the actual frequency information would lead to the opposite conclusion. Had we used B, instead, and had the subject also judged it to occur more often in the first position, we would not be able to tell whether the judgment reflects availability or factual knowledge since B is, in fact, more likely to occur in the first position.
We see, then, that the experimental logic constrains the interpretation of the data. We can conclude that people use heuristics instead of probability theory but we cannot conclude that their judgments are generally poor. All the same, it is the latter, unwarranted conclusion that is most often conveyed by this literature, particularly in settings outside psychology.
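The strong-inference logic Lopes describes can be summed up in a few lines. This is my own sketch of the diagnostic reasoning, not Kahneman and Tversky’s procedure: a letter is diagnostic only when availability and the true frequencies predict opposite answers, which is exactly why all the tested consonants came from the third-position group.

```python
def diagnose(judged_position, modal_position):
    """Infer the processing mode from a subject's answer.

    judged_position: the position the subject says is more common.
    modal_position:  the position where the letter actually occurs more often.
    """
    if judged_position == modal_position:
        # Matches the facts: could reflect real frequency knowledge,
        # so the answer tells us nothing about the process.
        return "ambiguous"
    # Contradicts the facts: only the availability heuristic predicts it.
    return "availability heuristic"

# R is actually more common in the third position, so judging "first"
# is diagnostic; for B the same "first" judgment would be ambiguous.
```

Note what the function can and cannot return: it can flag heuristic processing, but it has no branch that certifies a judgment as generally accurate, which is Lopes’s point about the evaluative conclusion being unwarranted.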
Lopes then turns her attention onto Kahneman and Tversky’s famous Science article.
In the original experimental reports, there is plenty of language to suggest that human judgments are often wrong, but the exposition focuses mostly on the delineation of process. In the Science article, however, Tversky and Kahneman (1974) shift their attention from heuristic processing to biased processing. In the introduction they tell us: ‘This article shows that people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations’ (p. 1124). By the time we get to the discussion, however, the emphasis has changed. Now they say: ‘This article has been concerned with cognitive biases that stem from the reliance on judgmental heuristics’ (p. 1130).
Examination of the body of the paper shows that the retrospective account is the correct one: the paper is more concerned with biases than with heuristics even though the experiments bear more on heuristics than on biases.