The blogs I read

Although RSS seems to be on the way out, I’ve found myself explaining feed readers to a few people recently. They asked for some suggestions of blogs to follow, so below are some from my reading list.

I try not to live in a bubble, but you can see a libertarian bent to these recommendations. My full reading list (as at 4 January 2015) is here – unzip and upload it into your favourite feed reader – and is a bit broader than the below might suggest.

Statistical Modeling, Causal Inference, and Social Science: My favourite blog. Regularly skewers statistical papers of all types. I’ve learnt more about the practical use of statistics from Andrew Gelman than I have in any statistics or econometrics class.

Offsetting Behaviour: Eric Crampton’s regular dismantling of those who want to protect us from ourselves is always worth reading.

Gene Expression: Still the best evolutionary biology and genetics blog.

Bleeding Heart Libertarians: The blog at which I feel most at home politically.

Econlog: I have only Bryan Caplan’s posts in my feed, although Caplan is possibly the most infuriating thinker I regularly read.

Askblog: Arnold Kling’s post-Econlog blog is always a source of sharp comment on interesting material.

Marginal Revolution: One of the most popular economics sites, but possibly the best aggregator of interesting content.

Econtalk: Not a blog but a podcast. Russ Roberts has an impressive guest list and is rarely dull. There is a massive back catalogue worth working through.

Club Troppo: A centrist Australian political blog. I don’t have any Australian “libertarian” or “free market” blogs in my feed, as they are generally horrible – conservative at best (rare), corporatist at worst, with posts closer to trolling than informative and comment sections that make the eyes bleed.

Information Processing: Stephen Hsu provides plenty of material at the cutting edge of research into genetics and intelligence.

Santa Fe Institute News: The best feed of complexity related stories and ideas.

Matt Ridley’s Blog: Hit and miss (a bit like The Rational Optimist), but more than enough good material.

The Enlightened Economist: A constant source of additions to my book reading list.

Self-evident but unexplored – how genetic effects vary over time

A new paper in PNAS reports on how the effect of a variant of a gene called FTO varies over time. Previous research has shown that people with two copies of a particular FTO variant are on average three kilograms heavier than those with none. But this was not always the case. I’ll let Carl Zimmer provide the background:

In 1948, researchers enlisted over 5,000 people in Framingham, Mass., and began to follow their health. In 1971, the so-called Framingham Heart Study also recruited many of the children of original subjects, and in 2002, the grandchildren joined in. In addition to such data as body mass index, the researchers have been gathering information on the genes of their subjects.

The scientists compared Framingham subjects with the risky variant of FTO to those with the healthy variant. Over all, the scientists confirmed the longstanding finding that people with the risky FTO variant got heavier.

People born before the early 1940s were not at additional risk of putting on weight if they had the risky variant of FTO. Only subjects born in later years had a greater risk. And the more recently they were born, the scientists found, the greater the gene’s effect.

Some change in the way people lived in the late 20th century may have transformed FTO into a gene with a big impact on the risk of obesity, the researchers theorized.

This result is unsurprising. It is standard knowledge that genetic effects can vary with environment, and when it comes to factors likely to influence obesity such as diet, there have been massive changes to the environment over time. As the authors of the PNAS paper note:

This idea, that genetic effects could vary by geographic or temporal context is somewhat self-evident, yet has been relatively unexplored and raises the question of whether some association results and genetic risk estimates may be less stable than we might hope.

It is great that the authors have provided a particular example of this change. Also useful: the study provides another response to the claim that genetics is not relevant to increases in obesity because there has been limited genetic change since levels of obesity took off. The high heritability of obesity has always pointed to the relevance of genetics, but this paper strengthens the case.
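The cohort pattern in the Framingham result can be mimicked with a toy simulation (my own illustration, not the paper’s method – the allele frequency, effect size and noise level below are invented): hold the genotype distribution fixed across cohorts and vary only an environmental multiplier on the variant’s effect. The measured per-allele effect then changes even though the genes do not.

```python
import random

def simulate_cohort(env_multiplier, n=10_000, base_weight=70.0, seed=0):
    """Simulate weights for a cohort in which an FTO-like variant's
    effect is scaled by an environmental multiplier.
    Genotype = number of risk alleles (0, 1 or 2), allele frequency 0.4."""
    rng = random.Random(seed)
    per_allele_effect = 1.5  # kg per risk allele when fully expressed
    data = []
    for _ in range(n):
        alleles = sum(rng.random() < 0.4 for _ in range(2))
        noise = rng.gauss(0, 10)  # non-genetic variation in weight
        weight = base_weight + env_multiplier * per_allele_effect * alleles + noise
        data.append((alleles, weight))
    return data

def measured_effect(data):
    """Difference in mean weight: two risk alleles versus none."""
    mean = lambda xs: sum(xs) / len(xs)
    w2 = [w for a, w in data if a == 2]
    w0 = [w for a, w in data if a == 0]
    return mean(w2) - mean(w0)

# Same gene pool, different environments: the measured 'gene effect' changes.
print(measured_effect(simulate_cohort(env_multiplier=0.0)))  # near zero (pre-1940s-like)
print(measured_effect(simulate_cohort(env_multiplier=1.0)))  # near 3 kg (modern-like)
```

The point of the sketch is only that an association study run on the first cohort would find nothing, and on the second would find the familiar three-kilogram gap, with identical genetics in both.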

In his NY Times piece, Carl Zimmer quotes study co-author Nicholas Christakis on whether the changing role of genes may be a more general phenomenon:

Dr. Nicholas A. Christakis, a sociologist and physician at Yale University and a co-author of the new study, suggested that the influence of many other genes on health had waxed and waned over the past century. Reconstructing this history could drastically influence the way doctors predict disease risk. What might look like a safe version of a gene today could someday become a risk factor.

“The thing we think is fixed may not be fixed at all,” said Dr. Christakis.

I have written before about another example of the changing effect of genes over time – the effect of genes on fertility.

Before the demographic transition when fertility rates plunged in the world’s developed countries, the heritability of fertility was around zero. This is unsurprising as any genetic variation in fitness is quickly eliminated by natural selection.

But when you look at the heritability of fertility after the demographic transition, things have changed. The heritability of fertility as derived from twin studies is around 0.2 to 0.4. That is, 20 to 40 per cent of the variation in fertility is due to genetic variation. People with different genes have responded to changes in environment in different ways.
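For readers unfamiliar with where twin-study heritability numbers come from, Falconer’s formula gives the classic back-of-the-envelope version: identical (MZ) twins share roughly all their genes, fraternal (DZ) twins roughly half, so doubling the gap between their correlations attributes that gap to genetic variation. The correlations below are invented for illustration, not taken from any fertility study:

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's formula: rough heritability estimate from twin correlations.
    MZ twins share ~100% of their genes, DZ twins ~50%, so doubling the
    difference in trait correlations attributes it to genetic variation."""
    return 2 * (r_mz - r_dz)

# Illustrative (made-up) post-transition fertility correlations:
r_mz, r_dz = 0.45, 0.30
print(round(falconer_h2(r_mz, r_dz), 2))  # 0.3 -> ~30% of variation attributed to genes
```

Modern twin studies use more sophisticated variance-decomposition models, but the intuition is the same: the further MZ correlations exceed DZ correlations, the higher the heritability estimate.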

The non-zero heritability of fertility has some interesting implications for long-term fertility. My working paper outlines the research on the heritability of fertility in discussing these long-term implications. I have posted on the working paper here.

A week of links

Links this week:

  1. In praise of complexity economics.
  2. Books coming out in 2015.
  3. Two links from the world of intellectual property madness – what would have entered the public domain on 1 January under the old copyright regime (HT: John Bergmayer), and Uber seeks to patent the idea of pricing based on supply and demand (HT: Ben Walsh).
  4. Test social programs so we know that they work. But also make sure that you use decent sample sizes, don’t over-generalise the results and replicate.
  5. Missing heritability – more evidence that the gap will be filled.
  6. Participation in warfare leads to reproductive success (HT: Razib Khan).
  7. Applying statistical thinking to education.

Best books I read in 2014

Continuing my tradition of giving the best books I read in the year – generally released in other years – the best books I read in 2014 are below (albeit from a smaller pool than usual).

John Coates’s (2012) The Hour Between Dog and Wolf: Risk Taking, Gut Feelings and the Biology of Boom and Bust: The best book I read this year. An innovative consideration of how human biology affects financial decision making.
Gregory Clark’s (2013) The Son Also Rises: Surnames and the History of Social Mobility: Not the most exciting book to read, but an important examination of social mobility.
David Colander and Roland Kupers’s (2014) Complexity and the Art of Public Policy: An important way to think about policy, even though I’m not convinced by many of their proposed applications.
Gerd Gigerenzer’s (2010) Rationality for Mortals: How People Cope with Uncertainty: Essays capturing a lot of Gigerenzer’s important work. I reread it following his disappointing Risk Savvy. I didn’t write a review, but posted about parts of the book here, here and here.
Sendhil Mullainathan and Eldar Shafir’s (2013) Scarcity: Why Having Too Little Means So Much: A novel way of looking at scarcity that extends beyond the typical analysis in economics, but I’m not convinced it presents a coherent new perspective on how the world works.
P.G. Wodehouse’s (1936) Young Men in Spats: Magic.

Complexity and the Art of Public Policy

The basis of David Colander and Roland Kupers’s book Complexity and the Art of Public Policy: Solving Society’s Problems from the Bottom Up is that the economy is a complex system and it should be examined through a complexity frame.

A complex system comprises many parts that interact in a nonlinear manner; you can’t simply add the parts together. While you might expect this to lead to chaos, emergent behaviour in complex systems can produce what appears to be an ordered state. Outcomes in complex systems are inherently hard to predict: small changes in initial conditions can have massive effects on outcomes, and any tweak to the system can cascade through it in myriad ways. Determining the web of interactions with precision is impossible, so despite the abundance of experts making economic arguments as though they know what they’re doing, they usually don’t.

The complexity frame does away with a lot of the assumptions built into policy analysis, including:

It doesn’t assume people are hyper rational; it doesn’t assume system dynamics are linear; it doesn’t assume tastes are unaffected by process; it doesn’t assume government can control; it doesn’t assume the competitive market can somehow exist independently of government and other social natural systems; it doesn’t assume the institutional structure is fixed. Through what it doesn’t assume, the complexity policy frame changes the nature of the policy discussion.

In general, I am a fan of this argument. It leads to a more humble approach to policy. And despite the suggestion that prediction is hard in such systems, complexity science does provide some insight.

We are not suggesting that society should resign itself to a fatalistic relativist position by concluding that since everything is complicated; you just have to fall back to your subjective judgment. That’s not what we mean. We advocate setting the bar substantially higher, with the idea of educated common sense. Educated common sense involves an awareness of the limitations of our knowledge that is inherent in the complexity frame. A central argument of this book is that with complexity now being several decades old as a discipline (and much older as a sensibility), policy that ignores this frame fails the educated common sense standard.

Importantly, the complexity frame provides a new perspective on government. Rather than the question being one of government versus markets, both top-down government solutions and bottom-up market solutions are seen as having evolved from the bottom up. The government solution is an element of the system, not outside it, with government a (crude) bottom-up solution to previous problems.

On that basis, the proper role of government is not to implement the government’s will, but rather the people’s will through governmental institutions built to solve collective action problems and provide the institutional environment for people to solve problems themselves. People can solve their problems in the right environment. Colander and Kupers write:

[The complexity approach to policy] assumes that individuals are smart and adaptive, and if responsibility is given to them, they will use it to avoid problems for themselves and to design institutions that achieve the goals they want. But they will do so only if they are given the chance and only in the appropriate institutional environment. To “be given a chance” means that government does not “solve” the problem prematurely. In that view the complexity approach to policy is similar to the market fundamentalist view. But it differs from the market fundamentalist position in two crucial ways: First, it sees individuals as concerned with much more than material welfare, their aspirations extending to broader social welfare improvements. Second, while the market fundamentalist position assumes that the market will enable an optimal solution, the complexity frame recognizes that there may well be lock-ins, emergent collective effects, or market failures that need to be willfully overcome via collective action to allow the economy to move to a more desirable basin of attraction.

Although I broadly agree, you can see the caricature of the ‘market fundamentalist’ creeping in – the implication that ‘market fundamentalists’ believe people only care about material welfare. A degree of caricature is present through much of the book. However, the second point is important. Emergent outcomes may not necessarily be optimal, although in the right institutional environment they often are.

One example where a successful institutional structure was established concerns an intersection in the town of Drachten in the Netherlands. This intersection has no traffic lights, no sidewalks, no stop signs and no traffic directions from police. The result is slightly faster traffic flow and fewer accidents. The appropriate institutional environment was one in which users were mixed and took care in their approach to the intersection, but were free to use judgment as to how to approach it.

Despite being distinguished from the market fundamentalist view, there is a laissez-faire bent to the complexity approach – although as Colander and Kupers explain, their use of “laissez-faire” better matches its historical origins than some current uses. Colander and Kupers also suggest that greater direct government intervention reduces the system’s success in getting the bottom-up “ecostructure” right.

Having said I am a fan of a complexity approach, the way Colander and Kupers make their case is repetitive. They spend the first two-thirds of the book placing the complexity frame between the opposing viewpoints of free markets and government control, and suggest that a complexity approach can rise above politics as it assumes neither government nor free markets are superior (Eric Beinhocker tries the same trick in The Origin of Wealth). This claim is somewhat naïve, and I expect that adoption of the complexity frame will simply add a new tool to the old battle (as used by Paul Ormerod in Why Most Things Fail). The authors also spend a lot of time contrasting the complexity frame with what they call the standard policy frame – also somewhat repetitive – although I do suspect this contrast is ultimately the greatest contribution that will come from complexity science.

One important piece of the complexity frame is norms. Norms are strong shapers of behaviour and outcomes. Turning back to the traffic example in Drachten, norms around watching out for pedestrians and the appropriate way to drive through a mixed use space are likely important. Trust is also an essential enabler. An influx of foreign drivers with different norms and values could result in the experiment breaking down. As an example of different norms, Colander and Kupers note that traffic merging varies between Germany and the Netherlands, meaning that road design might need to vary between two apparently similar cultures.

A further twist to the complexity frame is the assumption that norms and culture are not constant, and tastes are not necessarily well formed. The activities that people undertake can feed back on their wants and shape them as a person. The complexity frame shows that what the collective wants as its tastes and norms and what it has as its tastes and norms can differ, and “both society and individuals can know that some of the tastes and norms they have are not the tastes and norms they want”.

This mismatch leads Colander and Kupers to advocate what they call norms policy. Rather than small changes in the environment to affect people’s choices (the territory of behavioural economics and ‘nudges’), they propose supernudges – institutional changes that change and feed back on people’s tastes and norms. The policy discussion should be on how tastes evolve, change and can be influenced.

Their advocacy of norms policy has a starting assumption that tastes are arbitrary, which gives a degree of freedom in changing them. However, that starting point seems to have the same problem as the approach they critique – unsubstantiated assumptions. An evolutionary understanding of human behaviour might suggest some norms are rather robust.

Continuing on, they argue that as norms can be influenced by government action, government should tweak the environment so pro-social norms emerge. One example they provide is that government should try to influence the tastes and norms relating to the materialistic nature of society. They suggest that “many would agree that in today’s Western societies material welfare is given more prevalence than most people would like” and that there is nothing normal about conspicuous consumption as a norm. However, as above, expensive signalling and status desires seem very much part of human nature.

A specific example relates to climate change. Colander and Kupers argue that if people have climate friendly tastes, there is little cost to dealing with the problem of climate change (I’m not sure that argument strictly holds – if I have BMW friendly tastes, there may still be a substantial cost of acquiring one). As a result, government should encourage tastes less likely to create global warming than other tastes.

To give a practical example, they point to a publication co-authored by Kupers that suggests climate change is an economic opportunity, and that bigger emission reduction targets can result in more investment growth and employment in Europe. They do state that they don’t know if it will work, but they applaud it as a useful experiment. But how would you ever know it worked in a complex system? And what are the costs of failure? Further, parts of the report look like standard economics junk, complete with estimates of GDP in 2020 to the nearest billion dollars.

Beyond this climate change example, norms policy strikes me as problematic for two reasons. First, it smacks of naïvety about the nature of government (as does their suggestion that government could provide a positive role model). What tastes and norms would government like us to have (beyond voting for incumbents)? But more importantly, given that tastes and norms are part of a complex system, what confidence could government have that its actions would influence people in the right way? And how might those norms play out? Materialistic norms might be just the type of norms that allow many of our other non-materialistic preferences to be realised.

More broadly, they discuss the need to get the ecostructure right for positive norms or solutions to emerge. They consider this to be the area where the biggest gains from a complexity frame will be found.

One suggestion is facilitating the development of for-benefit firms, sparing social entrepreneurs from being shoehorned into the not-for-profit or for-profit categories. To the extent there are barriers to people forming firms with social objectives, it may be a useful idea. But once again, their starting position – that only for-benefit institutions can optimise overall welfare – appears somewhat strong. As a result, they worry about people using new forms of for-benefit institutions to benefit themselves, not society (is the for-benefit label just a marketing tool?), which seems a misplaced concern.

Having said that, as they do for many of their suggestions, they acknowledge that for-benefit structures might not achieve positive outcomes. This is matched with their general (welcome) call for experimentation – although experimentation in a complex system is difficult.

For many of the policy implications presented by Colander and Kupers, I was unclear how they related to the complexity frame. At times it felt as though they were attempting to claim any interesting idea as relating to complexity, be that prediction markets or prizes for innovation (see the notes at the bottom of this post for some examples). They even praise economists’ aim to find “the ideal instrumental variable and more generally torturing the data to make it tell a story,” despite the linearity embedded in most of these models. Complexity theory would suggest most of these results are useless.

Then there is also the occasional bashing of the standard policy model straw man. They claim that the standard policy model results in people thinking of government policy by default in terms of GDP, ignoring the mass of policy concerning equity or capability. They suggest resilience plays no part in the standard frame, ignoring the massive prudential regulatory apparatus applied to the banking system. They also provide an example of a massive investment in the Netherlands in constructing a water wall. People changed behaviour and moved into the protected areas, which made the cost of failure even higher – an outcome they suggest the standard policy frame ignores. But this is exactly what the standard frame would predict – change incentives, change behaviour.

Colander and Kupers also praise the emergence of behavioural economics, although they do criticise attempts to put these findings into the standard policy model. Their willingness to jump on the behavioural economics bandwagon does highlight one of the shortcomings of their approach – their failure to adopt an even deeper understanding of human behaviour as provided by evolutionary theory. It would provide a better framework on which to build their understanding of norms.

Overall, Complexity and the Art of Public Policy is a good book. However, the last third of the book did not convince me that complexity theory arms us with many new policy tools. A complexity frame punches holes in many of our methods of analysis and the policy options we put on the table using standard economic frameworks. But the very nature of complex systems makes it a challenge to propose options that we can claim with any confidence to have a positive effect. Colander and Kupers have ventured into that area – which I am grateful for. I was glad to read a complexity book that attempts to give some policy relevance without yet another description of the El Farol Bar problem. But the particular examples they provided leave me of the view that the strongest contribution of complexity theory will be to tell us when our standard economic policy tools will work, and when they won’t. That’s no small accomplishment, but in a world where we need to “do something”, it’s not an easy sell.

———

As I didn’t want to fill this review with minor complaints (as has been my bad habit recently), below are some additional notes that I took as I read the book.

Through the last third of the book I found myself constantly marking proposed applications of the complexity frame that did not seem to have much to do with complexity:

  • Colander and Kupers point to a 90 per cent reduction in plastic bag use in Ireland following the introduction of a small tax as a discontinuity or tipping point, but in the presence of easy substitutes, the result is not hard to explain in a standard economic framework.
  • They put granting prizes in the complexity frame, but why? It appears to involve standard incentives.
  • Prediction markets are placed in the complexity frame, but they can also be accommodated in the standard policy frame. And why doesn’t the complexity frame suggest that prediction markets will sometimes fail?
  • They propose charging fees for quasi-public goods where beneficiaries can be identified. Economics 101.
  • Their discussion of single payer versus competitive market in healthcare looks a lot like a standard policy debate. They state that the complexity framework shows the current setup is not ideal (does any?) and point to problems separating the buyer and payer (as standard as you might get).
  • They point to shorter duration copyright being better, and suggest that extensions to keep Mickey Mouse under copyright might be considered OK under the standard policy frame. It seems ridiculous under all frames to me.
  • They propose leasing government resources to fund social goods, rather than privatising. But what does the complexity frame show about this that a standard policy frame does not?
  • They propose charging for patents and copyright, pointing out that while it reduces financial incentives, people innovate for other motivations too. An interesting idea, but why is this a complexity frame? And on the merits, can you imagine the complexity of copyright law if government directly had its fingers in that pie? You would have the mess of copyright minus the incentive it is designed to provide.

On behavioural economics, they suggest the behavioural economists’ argument of less than perfect rationality undermines the traditional economic framework and all policy conclusions that flow from it. They also suggest that you can’t simply shoehorn behavioural economics into the standard economic model as it rests on different assumptions. This claim is a bit too strong – Vernon Smith’s market experiments suggest that markets can reach efficient outcomes even when individual participants are less than perfectly rational. Their willingness to adopt the findings of behavioural economics is also an interesting fit with their belief that people are smart and adaptive, which they suggest is a precondition for the emergence of bottom-up solutions.

An evolutionary framework could inform some of their analysis of the existence of path dependency and lock-in of income. Contrast their suggestion that changing the ecostructure could change distribution of income with the work of Gregory Clark showing the failure of policy changes to affect social mobility. Path dependency in the income of children might simply reflect the existence of the same underlying (genetic) factors.

A week of links

Links this week:

  1. Lectures on Human Capital by Gary Becker. (HT: Eric Crampton)
  2. We learn more from success than failure. A bit semantic, and where is this government agency with a learn-from-failure culture? But worth the read.
  3. The 2014 Nanny State awards.
  4. Robert Sapolsky on the Christmas truce of 1914.
  5. People like gifts that they want.
  6. Greater contact between racial groups increases bias? (HT: Razib Khan)
  7. Another arena where a bit more science might help – the justice system.
  8. The Accidental Lobster Farmers (HT: Tyler Cowen).
  9. Nothing like an article confirming my priors – LaTeX is a productivity sink (HT: Alex Tabarrok). A common comment I receive from economists on my papers is “Did you do this in Word?” The funny thing with LaTeX is that most people don’t change the default font – if you did that, people might not know it was prepared with LaTeX.

A week of links

Links this week:

  1. “I worry that most smart people have not learned that a list of dozens of studies, several meta-analyses, hundreds of experts, and expert surveys showing almost all academics support your thesis – can still be bullshit.” Awesome.
  2. I have only just realised that Gary Klein blogs at Psychology Today. A relatively recent post – The Insight Test.
  3. Bad statistics – same sex marriage edition.
  4. How to doctor a cost-benefit analysis.
  5. Replicating epigenetic claims.

Complexity versus chaos

Another clip from David Colander and Roland Kupers’s Complexity and the Art of Public Policy: Solving Society’s Problems from the Bottom Up – a nice description of how two often confused terms, complexity and chaos, differ and interrelate:

Chaos theory is a field of applied mathematics whose roots date back to the nineteenth century, to French mathematician Henri Poincaré. Poincaré was a prolific scientist and philosopher who contributed to an extraordinary range of disciplines; among his many accomplishments is Poincaré’s conjecture that deals with a famous problem in physics first formulated by Newton in the eighteenth century: the three body problem. The goal is to calculate the trajectories of three bodies, planets for example, which interact through gravity. Although the problem is seemingly simple, it turns out that the paths of the bodies are extraordinarily difficult to calculate and highly sensitive to the initial conditions.

One of the contributions of chaos theory is demonstrating that many dynamical systems are highly sensitive to initial conditions. The behavior is sometimes referred to as the butterfly effect. This refers to the idea that a butterfly flapping its wings in Brazil might precipitate a tornado in Texas. This evocative—if unrealistic—image conveys the notion that small differences in the initial conditions can lead to a wide range of outcomes.

Sensitivity to initial conditions has a number of implications for thinking about policy in such systems. For one, such an effect makes forecasting difficult, if not impossible, as you can’t link cause and effect. For another it means that it will be very hard to backward engineer the system—understanding it precisely from its attributes because only a set of precise attributes would actually lead to the result. How much time is spent on debating the cause of a social situation, when the answer might be that it simply is, for all practical purposes, unknowable? These systems are still deterministic in the sense that they can be in principle specified by a set of equations, but one cannot rely on solving those equations to understand what the system will do. This is known as deterministic chaos, but is mostly just called chaos.

While chaos theory is not complexity theory, it is closely related. It was in chaos theory where some of the analytic tools used in complexity science were first explored. Chaos theory is concerned with the special case of complex systems, where the emergent state of the system has no order whatsoever—and is literally chaotic. Imagine birds on the power line being disrupted by a loud noise and fluttering off in all directions. You can think of a system as being in these three different kinds of states, linear, complex, or chaotic—sitting on the line, flying in formation, or scrambling in all directions.

Like chaos theory, complexity theory is about nonlinear dynamical systems, but instead of looking at nonlinear systems that become chaotic, it focuses on a subset of nonlinear systems that somehow transition spontaneously into an ordered state. So order comes out of what should be chaos. The complexity vision is that these systems represent many of the ordered states that we observe—they have no controller and are describable not by mechanical metaphors but rather by evolutionary metaphors. This vision is central to complexity science and complexity policy.
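Sensitivity to initial conditions is easy to demonstrate numerically. A standard textbook illustration (mine, not the book’s) is the logistic map in its chaotic regime: two trajectories that start a billionth apart soon bear no relation to each other.

```python
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1-x), which is chaotic at r=4."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # perturb the start by one part in a billion

# Early on the trajectories agree; within a few dozen steps the
# perturbation has grown to swamp the signal entirely.
print(abs(a[5] - b[5]))                       # still tiny
print(max(abs(x - y) for x, y in zip(a, b)))  # large: the paths have diverged
```

The perturbation roughly doubles each step, so after thirty-odd iterations a billionth has become order one – which is why, as the quote says, you cannot forecast such a system by measuring its starting point a little more carefully.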

I’ll post a full review next week.

More praise of mathematics

Following my post last week on the need for more complicated models in economics, a new paper in PLOS Biology argues for the importance of mathematical models in showing ‘proof of concept’ (HT: Santa Fe Institute News). The authors write:

Proof-of-concept models, used in many fields, test the validity of verbal chains of logic by laying out the specific assumptions mathematically. The results that follow from these assumptions emerge through the principles of mathematics, which reduces the possibility of logical errors at this step of the process. The appropriateness of the assumptions is critical, but once they are established, the mathematical analysis provides a precise mapping to their consequences.

They point to the lack of trust many people have in mathematical models, but argue that once the theoretician fulfils their duty of making the robustness of the assumptions transparent, readers should take the results seriously.

Much of the doubt about the applicability of models may stem from a mistrust of the effects of logistical assumptions. It is the responsibility of the theoretician to make his or her knowledge of the robustness of these assumptions transparent to the reader; it may not always be obvious which assumptions are critical versus logistical, and whether the effects of the latter are known. It is likewise the responsibility of the empirically-minded reader to approach models with the same open mind that he or she would an experiment in an artificial setting, rather than immediately dismiss them because of the presence of logistical assumptions.

Several examples are provided in the paper, but my favourite example of models as ‘proof of concept’ relates to the handicap principle. I have posted about this model before (that time in the context of economists solving the problem 17 years before the biologists figured it out), so I will use some of my previous words.

[In 1975], Amotz Zahavi had a paper published titled Mate selection – a selection for a handicap. This paper spelt out Zahavi’s handicap principle, which described how honest signals of quality between animals could evolve. The signals are honest because they impose a handicap on the signaller that only a high quality signaller can bear.

The handicap principle was not accepted at first. Richard Dawkins wrote in an early edition of The Selfish Gene:

I do not believe this theory, although I am not quite so confident in my scepticism as I was when I first heard it. I pointed out then that the logical conclusion to it should be the evolution of males with only one leg and only one eye. Zahavi, who comes from Israel, instantly retorted: ‘Some of our best generals have only one eye!’ Nevertheless, the problem remains that the handicap theory seems to contain a basic contradiction. If the handicap is a genuine one-and it is of the essence of the theory that it has to be a genuine one-then the handicap itself will penalize the offspring just as surely as it may attract females. It is, in any case, important that the handicap must not be passed on to daughters.

John Maynard Smith published papers (such as this) suggesting that no model could be found in which the handicap principle could hold (although he did not rule out someone else finding one).

Finally, in 1990, Alan Grafen published two papers in which he established the population genetic and game theoretic foundations to the handicap principle. Mathematically, it could work. It convinced people such as Dawkins that the handicap principle could be right. … While Grafen’s papers are quite technical, the following diagram by Rufus Johnstone provides a simple illustration of how it works – and how similar it is to the work of Michael Spence. If two different quality individuals face differential costs and the same benefits (or differential benefits and the same costs), they will signal at different levels, making their signal a reliable indicator of their quality. The high-quality individual maximises benefits relative to costs at s_high, while the low-quality individual maximises benefits relative to costs at s_low.

[Diagram: Johnstone (2005)]
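Johnstone’s diagram can be sketched numerically (the payoff functions here are my own invented illustration, not Grafen’s actual model): give both types the same benefit from signalling but different per-unit costs, and each type’s best response is a different signal level, so the signal separates the types and is informative.

```python
def best_signal(cost_per_unit, benefit=lambda s: 10 * s - s**2, levels=None):
    """Pick the signal level maximising benefit(s) - cost, by brute force
    over a grid of candidate signal levels."""
    if levels is None:
        levels = [i / 10 for i in range(0, 101)]  # signals 0.0 .. 10.0
    return max(levels, key=lambda s: benefit(s) - cost_per_unit * s)

# Same benefit curve for both types; the low-quality type pays
# three times as much per unit of signal (the handicap bites harder).
s_high = best_signal(cost_per_unit=2.0)
s_low = best_signal(cost_per_unit=6.0)

print(s_high, s_low)   # the high-quality type chooses the larger signal
assert s_high > s_low  # differential costs -> separating signal levels
```

With these invented payoffs the high-quality type signals at 4.0 and the low-quality type at 2.0, so an observer can read quality straight off the signal – the essence of both Zahavi’s handicap and Spence’s job-market signalling.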

I like this example for two reasons. First, a mathematical model effectively settled a dispute in biology. Most biologists now agree that the evolution of handicaps as signals is plausible – the question now is how prevalent it is. But second, once the complicated model was developed, a quick, intuitive mathematical explanation followed – one that is relatively easy to convert back into English.