Dealing with algorithm aversion
Over at Behavioral Scientist is my latest contribution. From the intro:
The first American astronauts were recruited from the ranks of test pilots, largely due to convenience. As Tom Wolfe describes in his incredible book The Right Stuff, radar operators might have been better suited to the passive observation required in the largely automated Mercury space capsules. But the test pilots were readily available, had the required security clearances, and could be ordered to report to duty.
Test pilot Al Shepard, the first American in space, did little during his 15-minute first flight beyond being observed by cameras and a rectal thermometer (more on the “little” he did do later). Pilots rejected by Project Mercury dubbed Shepard “spam in a can.”
Other pilots were quick to note that “a monkey’s gonna make the first flight.” Well, not quite a monkey. Before Shepard, the first to fly in the Mercury space capsule was a chimpanzee named Ham, only 18 months removed from his West African home. Ham performed with aplomb.
But test pilots are not the type to like relinquishing control. The seven Mercury astronauts felt uncomfortable filling a role that could be performed by a chimp (or spam). Thus started the astronauts’ quest to gain more control over the flight and to make their function more akin to that of a pilot. A battle for decision-making authority—man versus automated decision aid—had begun.
Head on over to Behavioral Scientist to read the rest.
While the article draws quite heavily on Tom Wolfe’s The Right Stuff, the use of the story of the Mercury astronauts was somewhat inspired by Charles Perrow’s Normal Accidents. Perrow looks at both sides of the problems that emerged during the Mercury missions: the operator error, which forms the opening of my article, and the designer error, which features in the close.
One issue that became apparent to me during drafting was the distinction between an algorithm determining a course of action and the execution of that action through mechanical, electronic, or other means. The example of the first space flights clearly raises this issue. Many of the problems were not that the basic calculations (the algorithms) were faulty; rather, the execution failed. In early drafts of the article I tried to draw this distinction out, but it made the article clunky, so I ultimately reduced the point to a mention in the close. It’s something I might explore at a later time, because I suspect “algorithm aversion”, when applied to self-driving cars, relates to both decision making and execution.
Another issue that became stark was the limit to the superiority of algorithms. In the first draft, I did not return to the Mercury missions for the close. It was easy to talk of bumbling humans in the first space flights and how to guide them toward better use of algorithms. But that story was too neat, particularly given the example I had chosen. During the early flights there were plenty of times when the astronauts had to step in and save themselves. Perhaps if I had used a medical diagnosis or a more typical decision scenario in the opening, I could have written a cleaner article.
Regardless, the mix of operator and designer error (to use Perrow’s framing) has led me down a path of exploring how to use algorithms when the decision is idiosyncratic or is being made in a less developed system. The early space flights are one example, but strategic business decisions might be another. What is the right balance of algorithms and humans there? At this point, I’m planning for that to be the focus of my next Behavioral Scientist piece.