Dealing with algorithm aversion

Over at Behavioral Scientist is my latest contribution. From the intro:

The first American astronauts were recruited from the ranks of test pilots, largely due to convenience. As Tom Wolfe describes in his incredible book The Right Stuff, radar operators might have been better suited to the passive observation required in the largely automated Mercury space capsules. But the test pilots were readily available, had the required security clearances, and could be ordered to report to duty.

Test pilot Al Shepard, the first American in space, did little during his 15-minute flight beyond being observed by cameras and a rectal thermometer (more on the “little” he did do later). Pilots rejected by Project Mercury dubbed Shepard “spam in a can.”


Astronaut Ham.

Other pilots were quick to note that “a monkey’s gonna make the first flight.” Well, not quite a monkey. Before Shepard, the first to fly in the Mercury space capsule was a chimpanzee named Ham, only 18 months removed from his West African home. Ham performed with aplomb.

But test pilots are not the type to like relinquishing control. The seven Mercury astronauts felt uncomfortable filling a role that could be performed by a chimp (or spam). Thus started the astronauts’ quest to gain more control over the flight and to make their function more akin to that of a pilot. A battle for decision-making authority—man versus automated decision aid—had begun.

Head on over to Behavioral Scientist to read the rest.

While the article draws quite heavily on Tom Wolfe’s The Right Stuff, the use of the story of the Mercury astronauts was somewhat inspired by Charles Perrow’s Normal Accidents. Perrow looks at two sides of the problems that emerged during the Mercury missions – the operator error, which forms the opening of my article, and the designer error, which features in the close.

One issue that became apparent to me during drafting was the distinction between an algorithm determining a course of action, and the execution of that action through mechanical, electronic or other means. The example of the first space flights clearly illustrates this issue. Many of the problems were not that the basic calculations (the algorithms) were faulty. Rather, the execution failed. In early drafts of the article I tried to draw this distinction out, but it made the article clunky. I ultimately reduced this point to a mention in the close. It’s something I might explore at a later time, because I suspect “algorithm aversion” when applied to self-driving cars relates to both decision making and execution.

Another issue that became stark was the limit to the superiority of algorithms. In the first draft, I did not return to the Mercury missions for the close. It was easy to talk of bumbling humans in the first space flights and how to guide them toward better use of algorithms. But that story was too neat, particularly given the example I had chosen. During the early flights there were plenty of times when the astronauts had to step in and save themselves. Perhaps if I had used a medical diagnosis or other more typical decision scenario in the opening I could have written a cleaner article.

Regardless, the mix of operator and designer error (to use Perrow’s framing) has led me down a path of exploring how to use algorithms when the decision is idiosyncratic or is being made in a less developed system. The early space flights are one example, but strategic business decisions might be another. What is the right balance of algorithms and humans there? At this point, I’m planning for that to be the focus of my next Behavioral Scientist piece.

5 comments

  1. My issue is that I like to drive. I like to interact with the world and make decisions. I don’t want to be just a passenger.

    Let’s pretend that there was a system that could outperform any trader. Should all traders turn over their accounts to the system and go sit in their backyards? They should, but could you walk away from watching the market?

    If a computer can play a game better than a human, next time we get together should we all turn our cards over to the computer and let it play the game?

    I want to play the game! … but computers can do it better.

    What do we need people for?

    I’m sorry, did you want the red pill or the blue pill? (The Matrix)

    1. Scott Crossfield was not an astronaut. Do you mean Scott Carpenter?
      While you may be educated in behavioral sciences and statistics, you are clearly deficient in the fields of engineering and NASA (or any other “test systems”) history. Automated systems failed then as they do now. Programming errors abound: Ariane 5. Solar vs. sidereal time: Gemini V. Your historical information is incorrect as well and lacks engineering detail. The main “behavior” I learned about in reading this article was that people can be lazy in their research. The balance of the information may have extraordinary merit. In other words, systems are flawed and complicated. I wonder whether we should consider an analogy to incompleteness here.

      1. Cheers for the Crossfield/Carpenter pick-up. I repeated the mistake from Normal Accidents (although I should have clicked having read The Right Stuff).

        On automated systems, did you make it to the end of the article?

  2. The marine industry has an example of algorithms in control with a long history of struggles with the issues around intervention by operators. This is with ships’ Dynamic Positioning Systems. These are computer systems designed to control the ship to stay stationary.
    As computers become more reliable, issues with human factors are becoming more important.

  3. I don’t have time to track it down, but I have seen research indicating that part of the reason people fear dying in a plane crash more than a car accident is that when they picture themselves in both situations, they feel like they have no input/effect on the plane crash, but have some on the car accident. The feeling of having no control over the hypothetical emergency is what freaks them out. Even if the odds say manual is more dangerous, people want to feel like they have a shot at a last second miracle, at least.
    I think this ties into what you said about having a limited “manual override”.
