A week of links

Links this week:

  1. Academic urban legends spreading through sloppy citation. In PhD land, I have constantly found myself following citation chains that don’t lead to what they claim.
  2. Some progress in the replication wars. I’ll post about some of the specific examples over coming months.
  3. The evolutionary emergence of property rights (ungated working paper). HT: Ben Southwood
  4. Attribute substitution in charities – the evaluability bias. HT: Alex Gyani
  5. Peter Turchin reviews Richard Wrangham’s Catching Fire: How Cooking Made Us Human.
  6. Arnold Kling on Nicholas Wade. Comments and pointer from here.
  7. Polygenic modelling in cattle breeding. Humans next.
  8. An interesting debate on Cato Unbound this month – the libertarian case for a basic income guarantee.

Not the jam study again

Go to any behavioural science conference, event or presentation, and there is a high probability you will hear about “the jam study”. Last week’s excellent MSiX was no exception, with at least three references I can recall. The story is wonderfully simple and I have, at times, been mildly sympathetic to the idea. However, it is time for this story to be retired or heavily qualified with the research that has occurred in the intervening years.

As a start, what is the jam study? In 2000, Mark Lepper and Sheena Iyengar published their findings (ungated pdf) about the response of consumers to displays of jam in an upmarket grocery store. Their paper also contained similar experiments involving choice of chocolate and essay questions, but those experiments have not gained the same reputation.

On two Saturdays, they set up tasting displays of either six or 24 jars of jam. Consumers could taste as many jams as they wished, and anyone who approached the tasting table received a $1 discount coupon to buy jam. The large display of 24 jams did a better job of attracting initial interest, with 60 per cent of passers-by stopping, compared with 40 per cent at the six-jam display. But only three per cent of those who stopped at the 24-jam display went on to purchase jam, compared with almost 30 per cent of those who stopped at the six-jam display.
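
To see what those percentages imply, here is a quick back-of-the-envelope sketch of purchases per passer-by, using the rounded figures above (illustrative only; the paper reports the underlying counts, which differ slightly):

```python
# Implied purchases per 100 passers-by, using the rounded percentages quoted above.

def purchases_per_100(stop_rate, buy_rate_given_stop):
    """Share of passers-by who both stop at the display and then buy jam."""
    return 100 * stop_rate * buy_rate_given_stop

large_display = purchases_per_100(stop_rate=0.60, buy_rate_given_stop=0.03)
small_display = purchases_per_100(stop_rate=0.40, buy_rate_given_stop=0.30)

print(f"24-jam display: ~{large_display:.1f} purchases per 100 passers-by")  # ~1.8
print(f"6-jam display:  ~{small_display:.1f} purchases per 100 passers-by")  # ~12.0
```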

This result has been one of the centrepieces of the argument that more choice is not necessarily good. The larger display seemed to reduce consumer motivation to buy the product. The theories around this concept and the associated idea that more choice does not make us happy are often labelled the choice overload hypothesis or the paradox of choice.

Fast-forward 10 years to another paper, this one by Benjamin Scheibehenne, Rainer Greifeneder and Peter Todd. They surveyed the literature on the choice overload hypothesis – there is plenty. And across the basket of studies, evidence of choice overload does not emerge so clearly. In some cases, choice increases purchases. In others it reduces them. Scheibehenne and friends determined that the mean effect size of changing the number of choices across the studies was effectively zero.
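
To illustrate how a mean effect size can come out at roughly zero even when individual studies find real effects in both directions, here is a minimal sketch with invented effect sizes (Cohen's d, positive meaning a larger assortment increased purchases). It is arithmetic only, not a re-run of their meta-analysis:

```python
# Hypothetical effect sizes (Cohen's d) from imagined studies: positive values mean
# a larger assortment increased purchases, negative values mean it reduced them.
effect_sizes = [0.45, -0.50, 0.20, -0.15, 0.30, -0.35, 0.05]

mean_d = sum(effect_sizes) / len(effect_sizes)
print(f"Mean effect size: {mean_d:.2f}")  # effectively zero, despite sizeable individual effects
```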

More pointedly, the reviewed studies included a few attempts to replicate the jam study results. An experiment using jam in an upscale German supermarket found no effect. Other experiments found no effect of choice size using chocolates or jelly beans. There were small differences in study design between these and the original jam study (as original authors are always quick to point out when replications fail), but if studies are so sensitive to study design and hard to replicate, it seems foolhardy to extrapolate the results of the original study too far.

That is not to say that there is not something interesting going on here. Scheibehenne and friends suggest there may be a restrictive set of conditions under which choice overload occurs. These conditions might involve the complexity (rather than the size) of the choice, the lack of a dominant alternative, the way the assortment is arranged, time pressure or the distribution of product quality – features of how the choice is presented rather than of how many options there are. And since the jam study appears tough to replicate, these conditions might be particularly narrow. The common refrain of making it easy for customers – the standard recommendation for dealing with choice overload – still holds under most of these conditions, but they point to different and more subtle solutions than simply reducing the number of options.

There are a lot of interesting studies floating around on choice overload – from decisions about turning off life-support (ungated pdf) to retirement savings (ungated pdf) – and the message is not always the same. But reading through them makes it clear that the jam study is just the tip of an iceberg and not necessarily representative of what lies beneath.

Finally, when Tim Harford wrote about these studies several years ago, he pointed out another often neglected argument about the importance of choice. It is only because we have choice that we are offered any good products at all, with companies incentivised to compete for us as customers. Even if choice has negative consequences, a world without choice might be worse.

A week of links

Links this week:

  1. Some gold from Robert Sapolsky – what is going on in teenage brains? Plus a bonus interview.
  2. The latest issue of Nautilus (the source of the Sapolsky material) contains a lot of other good material – fruit and vegetables trying to kill you and chaos in the brain among them. I recommend scanning the table of contents.
  3. The changing dynamics of marriage inequality.
  4. Andrea Castillo with an introduction to the neoreaction (including some “homebrewed evolutionists”).
  5. Geoffrey Miller has teamed up with Tucker Max and is offering dating advice informed by evolution.

An MSiX reading list

Yesterday was day one of the Marketing Science Ideas Xchange (MSiX). As I mentioned in a previous post, it has been an interesting opportunity to see behavioural science outside of the academic and economics environments I am used to. There were a lot of interesting presentations, and a lot of good books were mentioned along the way.

First, a couple of blasts from the past: Claude Hopkins’s Scientific Advertising (if the one dollar Amazon price is prohibitive, it doesn’t take much searching to find some free pdf versions) and Vance Packard’s The Hidden Persuaders. The idea of injecting more science into advertising is not new.

The usual behavioural science texts got plenty of mentions, particularly Daniel Kahneman’s Thinking, Fast and Slow. System 1 and System 2 thinking were regular frames for the speakers (and in today’s workshops). Richard Thaler and Cass Sunstein’s Nudge and Dan Ariely’s Predictably Irrational also got the expected mentions.

The first three speakers had an evolutionary thread in parts of their talks (nice to see), so naturally a few books I have plugged before came up. Rory Sutherland put up his reading list from Verge, which includes Paul Seabright’s The Company of Strangers, Jonathan Haidt’s The Righteous Mind, Robert Kurzban’s Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind and Robert Frank’s The Darwin Economy. All highly recommended, as is the rest of Sutherland’s reading pile, although I haven’t read Stuart Sutherland’s Irrationality: the enemy within, which I suppose will get added to my list.

Another book I had not come across before was Iain McGilchrist’s The Master and His Emissary: The Divided Brain and the Making of the Western World, which looks interesting.

Uri Gneezy and John List got a solid mention from the last presenter, Liam Smith from Monash University’s BehaviourWorks Australia. Gneezy and List’s new book The Why Axis: Hidden Motives and the Undiscovered Economics of Everyday Life is also sitting on my reading pile.

Outside of the presentations, a few other interesting books came up in conversation. They included Jim Manzi’s Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society, which should be on your reading list. One of my favourite books, Christopher Buckley’s Thank You for Smoking was also mentioned, which was unsurprising considering the potential sin industry clients of many of the conference attendees – and if you do read it, rip out the last couple of pages. While Barry Schwartz’s book The Paradox of Choice was not specifically mentioned, the phrase was regularly used.

Finally, Adam Ferrier, the conference curator, has a book out – The Advertising Effect: How to Change Behaviour. After organising a conference myself earlier in the year, I feel for him – many rewards but so much effort.

Gigerenzer versus nudge

Since I first came across it, I have been a fan of Gerd Gigerenzer’s work. But I have always been slightly perplexed by the effort he expends framing his work in opposition to behavioural science and “nudges”. Most behavioural science aficionados who are aware of Gigerenzer’s work are fans of it, and you can appreciate behavioural science and Gigerenzer without suffering from two conflicting ideas in your mind.

In a recent LSE lecture about his new book Risk Savvy: How to Make Good Decisions (which sits unread in my reading pile), Gigerenzer again takes a few swipes at Daniel Kahneman and friends. The blurb for the podcast gives a taste: a set of coercive government interventions is listed, none of which are nudges, and it is suggested that we need risk-savvy citizens who won’t be scared into surrendering their freedom. Slotted between these is the suggestion that some people see a need for “nudging”.

Gigerenzer does provide a different angle to the behavioural science agenda. His work has provided strong evidence for the accuracy of heuristics and shown that many of our so-called irrational decisions make sense from the perspective of the environment in which they were designed (evolved). But his work doesn’t undermine the point that many decisions are now made outside the environment in which those heuristics developed – fast, frugal and well-shaped heuristics have not stopped us getting fat, spending huge amounts on unused gym memberships or failing to save for retirement. Gigerenzer’s work adds depth to the behavioural analysis, rather than undermining it, and points to a richer set of potential solutions.

When Gigerenzer starts throwing around solutions, the difference between his approach and nudging becomes even hazier. In the LSE lecture he suggests that doctors be trained to present risks to patients in a certain way. That doesn’t seem much different from a typical nudge, although here it is the doctors – previously statistically illiterate – whose presentation of the information nudges patient behaviour.

One other interesting point in the lecture is when Gigerenzer speaks about the failure of breast cancer screening to cut deaths, and the presentation of results in deceptive ways designed to increase screening rates. He proposes presenting the information as natural frequencies, which would likely reduce the rate of screening. But what of screening tests whose false positives don’t carry deleterious side-effects on the scale of breast cancer’s? Should they be presented as Gigerenzer proposes, or in alternative ways more likely to induce screening? There has been no shortage of work in behavioural science designed to increase screening rates, particularly given the other biases and barriers that need to be overcome. I prefer Gigerenzer’s approach, but can see the arguments that would be mounted for the other side.
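
For readers unfamiliar with natural frequencies, the idea is to translate conditional probabilities into counts of people. A minimal sketch, with illustrative numbers loosely in the range Gigerenzer uses for mammography examples (the prevalence, sensitivity and false-positive rate below are assumptions, not figures from the lecture):

```python
# Convert conditional probabilities into natural frequencies for a screening test.
# The numbers below are illustrative assumptions, not figures from the lecture.

population = 1000
prevalence = 0.01           # 1% of those screened have the disease
sensitivity = 0.90          # 90% of those with the disease test positive
false_positive_rate = 0.09  # 9% of those without the disease test positive

with_disease = population * prevalence                                # 10 people
true_positives = with_disease * sensitivity                           # 9 people
false_positives = (population - with_disease) * false_positive_rate  # ~89 people

ppv = true_positives / (true_positives + false_positives)
print(f"Of {population} people screened: ~{true_positives:.0f} true positives, "
      f"~{false_positives:.0f} false positives")
print(f"Chance a positive result means disease: {ppv:.0%}")  # roughly 9%
```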

Otherwise, Gigerenzer’s speech channels Nassim Taleb on financial markets, before hinting at some of the very interesting work he is doing with the Bank of England. It’s generally worth a listen.

A week of links

Links this week:

  1. Detecting irrational exuberance in the brain – neuroeconomists confirm Warren Buffett’s wisdom (original article here).
  2. Spouses are more genetically similar than people chosen at random, but they are far more similar in education (ungated pdf).
  3. A well-established fact, but further evidence that impatient adolescents do worse later in life.
  4. Homo Oeconomicus Versus Homo Socialis – an interesting looking conference at ETHZ.
  5. Evolution in the social sciences – a special issue of PNAS.

Our visual system predicts the future

I am reading John Coates’s thus far excellent The Hour Between Dog and Wolf: How Risk Taking Transforms Us, Body and Mind. There are many highlights and interesting pieces, the below being one of them.

First, we do not see in real time:

When light hits our retina, the photons must be translated into a chemical signal, and then into an electrical signal that can be carried along nerve fibers. The electrical signal must then travel to the very back of the brain, to an area called the visual cortex, and then project forward again, along two separate pathways, one processing the identity of the objects we see, the “what” stream, as some researchers call it, and the other processing the location and motion of the objects, the “where” stream. These streams must then combine to form a unified image, and only then does this stream emerge into conscious awareness. The whole process is a surprisingly slow one, taking … up to one tenth of a second. Such a delay, though brief, leaves us constantly one step behind events.

So how does our body deal with this problem? How could you catch a ball or dodge a projectile if your vision is behind time?

[T]he brain’s visual circuits have devised an ingenious way of helping us. The brain anticipates the actual location of the object, and moves the visual image we end up seeing to this hypothetical new location. In other words, your visual system fast forwards what you see.

Very cool concept, but how would you show this?

Neuroscientists … have recorded the visual fast-forwarding by means of an experiment investigating what is called the “flash-lag effect.” In this experiment a person is shown an object, say a blue circle, with another circle inside it, a yellow one. The small yellow circle flashes on and off, so what you see is a blue circle with a yellow circle blinking inside it. Then the blue circle with the yellow one inside starts moving around your computer screen. What you should see is a moving blue circle with a blinking yellow one inside it. But you do not. Instead you see a blue circle moving around the screen with a blinking yellow circle trailing about a quarter of an inch behind it. What is going on is this: while the blue circle is moving, your brain advances the image to its anticipated actual location, given the one-tenth-of-a-second time lag between viewing it and being aware of it. But the yellow circle, blinking on and off, cannot be anticipated, so it is not advanced. It thus appears to be left behind by the fast-forwarded blue circle.

A quick scan of the Wikipedia page on the flash-lag effect suggests there are a few competing explanations, but it’s an interesting idea all the same. It would explain that feeling of disbelief when a batter swings at and misses a ball that moves unexpectedly in the air: they would have seen the ball precisely where they swung.
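
To get a feel for the size of the problem the visual system is solving, here is a small sketch of how far a moving object travels during a tenth-of-a-second processing delay (the speeds are illustrative assumptions, not figures from the book):

```python
# Distance travelled during the ~0.1 s visual processing delay described above.
# The speeds are illustrative assumptions, not figures from the book.

PROCESSING_DELAY_S = 0.1

def lag_distance_m(speed_kmh):
    """Metres an object moves while the brain is still processing its image."""
    speed_ms = speed_kmh * 1000 / 3600  # convert km/h to m/s
    return speed_ms * PROCESSING_DELAY_S

for label, speed_kmh in [("Fast bowler's delivery (~140 km/h)", 140),
                         ("Thrown ball (~60 km/h)", 60)]:
    print(f"{label}: moves ~{lag_distance_m(speed_kmh):.1f} m in 0.1 s")
```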

The video below provides a visual illustration.

A week of links

Links this week:

  1. Why idiots succeed.
  2. Rory Sutherland on social norms.
  3. Economic incentives versus nudge (pdf). Don’t forget that basic economic mechanisms can work.
  4. We’re related to our friends.
  5. Are there really trillion dollar bills on the sidewalk?
  6. A bash of the Myers-Briggs test. Personally, I’m a fan of the big five plus g. On g, the heritability of chimp IQ.
  7. Talent versus practice. Talent wins this one.
  8. Throwing away money on brain science.

The wisdom of crowds of people who don’t believe in the wisdom of crowds

MIT Technology Review reports new research on the “wisdom of the confident”:

It turns out that if a crowd offers a wide range of independent estimates, then it is more likely to be wise. But if members of the crowd are influenced in the same way, for example by each other or by some external factor, then they tend to converge on a biased estimate. In this case, the crowd is likely to be stupid.

Today, Gabriel Madirolas and Gonzalo De Polavieja at the Cajal Institute in Madrid, Spain, say they found a way to analyze the answers from a crowd which allows them to remove this kind of bias and so settle on a wiser answer.

… Their idea is that some people are more strongly influenced by additional information than others who are confident in their own opinion. So identifying these more strongly influenced people and separating them from the independent thinkers creates two different groups. The group of independent thinkers is then more likely to give a wise estimate. Or put another way, ignore the wisdom of the crowd in favor of the wisdom of the confident.

To test this, they eliminated those who updated their estimates based on the crowd’s:

Madirolas and De Polavieja began by studying the data from an earlier set of experiments in which groups of people were given tasks such as to estimate the length of the border between Switzerland and Italy, the correct answer being 734 kilometers.

After one task, some groups were shown the combined estimates of other groups before beginning their second task. These experiments clearly showed how this information biased the answers from these groups in their second tasks. …

That allows them to divide the groups into independent thinkers and biased thinkers. Taking the collective opinion of the independent thinkers then gives a much more accurate estimate of the length of the border.
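
A toy simulation of the idea, assuming a simple model in which “influenced” estimators shift their guesses towards a shared biased signal while independent thinkers keep their own noisy estimates (the true value, group sizes, noise levels and weights are all made-up parameters):

```python
import random

# Toy model of "wisdom of the confident": independent thinkers keep their own noisy
# estimate, while influenced estimators shift towards a shared (biased) social signal.
# All parameters below are made-up assumptions for illustration.

random.seed(1)
TRUE_VALUE = 734       # e.g. length of the Swiss-Italian border in km
SOCIAL_SIGNAL = 1000   # a biased collective estimate that everyone can see

independent = [random.gauss(TRUE_VALUE, 150) for _ in range(50)]
influenced = [0.4 * random.gauss(TRUE_VALUE, 150) + 0.6 * SOCIAL_SIGNAL
              for _ in range(50)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Whole crowd:          {mean(independent + influenced):.0f} km")
print(f"Independent thinkers: {mean(independent):.0f} km")  # typically much closer to 734
```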

The funny thing about this research is that anyone who believes in the wisdom of crowds and updates their belief based on that collective wisdom is then excluded from the collective estimate. The wisdom of crowds needs someone who trusts their own opinion more than that of the crowd. It is similar to the efficient markets hypothesis relying on those who don’t believe in it – if everyone believed markets were efficient, no one would invest effort in finding and acting on information that might affect market prices. That effort is what allows prices to reflect this information.

So who are the confident people who form this more accurate estimate? The Dunning-Kruger effect tells us that the unskilled will be overconfident, as they don’t have the cognitive skills to recognise their ineptitude. But despite this effect, the more skilled do tend to be more confident than the unskilled – just not by as much as the skill gap warrants. As a result, screening out the less confident can still weed out the least skilled.

The behaviour genetics to eugenics to Nazi manoeuvre

Recently, I’ve tended to roll my eyes rather than respond to poor commentary on behaviour genetics. But a review by Kate Douglas at New Scientist, in which she pulls the behaviour genetics to eugenics to Nazi manoeuvre, has pointed out a potentially interesting book.

First, from the conclusion to Douglas’s review (actually, not so much a review but a launchpad):

Behaviour geneticists came to see finding high heritability as a justification for their work. But heritability changes depending on the environment. Grow those tomatoes in a regulated greenhouse and almost all the difference in their height will be thanks to their genes; grow them on a sloping, partly shaded field and the effect of heritability is lower.

Nature and nurture are not distinct, and the complexity of their interactions is increasingly apparent in this genomic age. Heritability can’t even be reliably estimated in humans using twin and adoption studies, the method of choice for behaviour geneticists.

All this undermines the supposition that heritability tells us about the cause of a behaviour. In fact, heritability is almost entirely meaningless.

I haven’t yet met a behaviour geneticist who doesn’t understand that heritability can vary with environment. Just look at Eric Turkheimer and friends’ work on the heritability of IQ among different socioeconomic groups. The change in heritability across environments tells us something. And if heritability measures are robust across environments (as they are for IQ across most socioeconomic groups), that tells you something too.
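
For readers who haven’t seen the definition, heritability is the share of phenotypic variance attributable to genetic variance, so it mechanically shrinks as environmental variance grows. A minimal sketch of the tomato example (the variance figures are invented):

```python
# Heritability as the share of phenotypic variance explained by genetic variance:
# h2 = V_G / (V_G + V_E). The variance figures below are invented for illustration.

def heritability(genetic_var, environmental_var):
    return genetic_var / (genetic_var + environmental_var)

V_G = 40  # genetic variance (the same seeds in both settings)

print(f"Uniform greenhouse (V_E = 5):   h2 = {heritability(V_G, 5):.2f}")   # ~0.89
print(f"Variable open field (V_E = 60): h2 = {heritability(V_G, 60):.2f}")  # 0.40
```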

But moving on, Douglas’s review is of a new book, Misbehaving Science: Controversy and the Development of Behavior Genetics by Aaron Panofsky. The blurb on the book suggests it might be better than the review, and could contain some interesting history on the debates in the field.

Behavior genetics has always been a breeding ground for controversies. From the “criminal chromosome” to the “gay gene,” claims about the influence of genes like these have led to often vitriolic national debates about race, class, and inequality. Many behavior geneticists have encountered accusations of racism and have had their scientific authority and credibility questioned, ruining reputations, and threatening their access to coveted resources.

In Misbehaving Science, Aaron Panofsky traces the field of behavior genetics back to its origins in the 1950s, telling the story through close looks at five major controversies. In the process, Panofsky argues that persistent, ungovernable controversy in behavior genetics is due to the broken hierarchies within the field. All authority and scientific norms are questioned, while the absence of unanimously accepted methods and theories leaves a foundationless field, where disorder is ongoing. Critics charge behavior geneticists with political motivations; champions say they merely follow the data where they lead. But Panofsky shows how pragmatic coping with repeated controversies drives their scientific actions. Ironically, behavior geneticists’ struggles for scientific authority and efforts to deal with the threats to their legitimacy and autonomy have made controversy inevitable—and in some ways essential—to the study of behavior genetics.