Author: Jason Collins

Economics. Behavioural and data science. PhD economics and evolutionary biology. Blog at jasoncollins.blog

My year

In the day job, for most of this year I was seconded onto the Australian Government’s Financial System Inquiry. The Inquiry was established to provide a broad review of the Australian financial system, looking at system stability, competition, consumer protection, technological change and whether the system was serving the needs of users.

The Inquiry’s final report is now out and available here. It has received a lot of press here – I think my favourite article so far is this one (if you hit the paywall, google “David Murray has gone rogue” and try that link).

Among other things, there are recommendations to increase bank capitalisation, to introduce new obligations on financial product issuers and distributors, and to hold a review into the ownership and use of customers’ financial data. But given my role in the Inquiry and the stage the Government is at – it is now seeking public comment – it’s not really appropriate for me to say which recommendations I support.

Possibly the most interesting recommendations are in the retirement income space. Australia has a compulsory superannuation system, where (currently) 9.5 per cent of our income is required to go into retirement savings. But after a lifetime of being forced to save, once you reach what is called the “preservation age”, you can take the money out. You are free to blow it on a holiday, sports cars or a house exempt from the pension means test, and then receive the pension.

To try to change this behaviour, the Inquiry recommended the introduction of a default retirement product, which would have some mix of income flow and longevity insurance (so your money doesn’t run out before you die). It will be an interesting exercise to design a system where that default is a successful anchor. It will require a lot of tax, pension and other social policy settings to be aligned so that people don’t simply ignore the default and take their lump of cash another way.

The other big event of the year was the arrival of twin boys. We think they are identical – four months of confusing who is who is the basis for that – but DNA tests are on the way to confirm. And I’ll be keeping one locked in the cupboard for the next five years to prove to the genetic determinists that environment does matter.

We need more complicated mathematical models in economics

I am halfway through David Colander and Roland Kupers’s book Complexity and the Art of Public Policy: Solving Society’s Problems from the Bottom Up. Overall, it’s a good book, although the authors are somewhat slow to get to the point and there are plenty of lines that perplex or annoy (Arnold Kling seemed to have a similar reaction).

I’ll review it later, but one interesting argument in the book is that under a complexity approach, you may need more complicated mathematical models than those used in neoclassical economics. This is because the purpose of the models under a complexity approach is different. They write:

A person is walking home late one night and notices an economist searching under a lamppost for his keys. The person stops to help. After searching a while without luck he asks the economist where he lost his keys. The economist points far off into the dark abyss. The person asks, incredulously, “Then why the heck are you searching here?” To which the economist responds—“This is where the light is.”

Critics of economists like this joke because it nicely captures economic theorists’ tendency to be, what critics consider, overly mathematical and technical in their research. Superficially, searching where the light is (letting available analytic technology guide one’s technical research) is clearly a stupid strategy; the obvious place to search is where you lost the keys.

Telling old jokes doesn’t do much, and in this case the joke was a setup for a different punch line. That punch line is that the critic’s lesson taken from the joke is the wrong lesson if the economy is complex. For a complex system, which the social system is, a “searching where the light is” strategy makes good sense. Since the subject matter of social science is highly complex—arguably far more complex than the subject matter of most natural sciences—it is as if the social science policy keys are lost in the equivalent of almost total darkness. The problem is that you have no idea where in the darkness you lost them, so it would be pretty stupid to just go out searching in the dark. The chances of getting totally lost are almost 100 percent. In such a situation, where else but in the light can you reasonably search in a scientific way?

What is stupid, however, is if the scientist thinks he or she is going to find the keys under the lamppost.

The fact that decisions in complex systems are so uncertain and difficult to make does not mean that one should avoid dealing with them mathematically and scientifically. Quite the contrary; it allows for much more complicated mathematical models since the models are used for a different purpose. Returning to our economist joke in the first chapter, they aim not to precisely describe the real world, but to understand the topography of the landscape under the light. The mathematical models are trying to map different types of topography, which may be helpful when searching for the policy keys, but they do not represent the full search for the keys.

The policy answers can be found only by those searching in the dark, which involves dealing with the full complexity of the system. The fact that one is using the models primarily for guidance, rather than for prescriptions, frees one from forcing the models to have direct policy relevance, which, as we will discuss, is a major reason for the problems with existing economic models. Instead one can use higher-level mathematics that is up to the task. In technical terms, instead of using static equilibrium models that can be analytically solved, one is free to use nonlinear, dynamic models that are beyond analytic solution, but upon which computational tools can shed light. As we will discuss in later chapters, the mathematics of complex evolving systems is really hard and still developing. That is why in the past economists and other social scientists have avoided them. It’s also why their policy advice has not been especially useful when the solution required a comprehensive understanding of our complex evolving socioeconomic system.

[T]he criticism coming from complexity scientists was different from that of most heterodox economists. The usual heterodox criticism of standard economics was that it was too mathematical. This was not the criticism here. Complexity scientists were arguing that economics was not mathematical enough—not only was it not mathematical enough, it was using the wrong mathematics. They agreed that if it was to be science, it had to be “under analytical control.” But they were arguing that by using the right mathematics, highly complex systems containing high levels of agent interdependence could come under analytic or computational scrutiny. Complexity scientists argued that economists needed to start exploring nonlinear dynamic models, path-dependent models by using the mathematics and tools of complexity science.
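To make “nonlinear, dynamic, path-dependent” a little more concrete, here is a minimal sketch of my own (not from the book, and with arbitrary parameters) of a Pólya urn in the spirit of Brian Arthur’s increasing-returns models: each new adopter picks a technology with probability equal to its current market share, so early chance events get locked in and identical starting points wander to very different long-run outcomes – the kind of behaviour you explore by computation rather than by solving for an equilibrium.

```python
import random

# A toy path-dependent model (my own sketch, arbitrary parameters): a Polya
# urn where each new adopter picks a technology with probability equal to its
# current market share, so early random choices get locked in.

def polya_urn(steps=10000, seed=None):
    rng = random.Random(seed)
    a, b = 1, 1                      # both technologies start with one adopter
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1                   # success breeds success
        else:
            b += 1
    return a / (a + b)               # long-run market share of technology A

# Same model, same starting point, different histories, different outcomes.
print([round(polya_urn(seed=s), 2) for s in range(5)])
```

Each run settles on a different long-run share: the model has no single equilibrium to solve for, only histories to simulate.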

They also give a word of caution as to where the science is at:

Every period has its excesses: the current hype about the usefulness of formal models in complexity science holds echoes of the overconfidence in models that one saw from the 1930s onward. The time when models will provide complete answers to social policy questions, if such a time ever will exist, is still far in the future. Complexity models, like all models, are very useful and necessary, but they are not sufficient.

A week of links

Links this week:

  1. The case against early cancer detection. The charts on mammogram and PSA testing effectiveness present the statistics just as Gerd Gigerenzer would have us do.
  2. The case for business experimentation.
  3. Airline inequality.
  4. While arguments continue about the predictive power of genetic testing, entrepreneurs are already using it. HT: Steve Hsu
  5. However, there are still plenty of average ‘gene for’ studies being produced. HT: Tim Frayling

The unrealistic assumptions of biology

Biologists are usually among the first to tell me that economists rely on unrealistic assumptions about human decision making. They laugh at the idea that people are rational optimisers who care only about maximising consumption.

Some of these criticisms are undoubtedly correct. Humans do not care primarily about consumption. They seek mates or pursue other objectives related to their fitness. And of course, humans do not solve complex constrained optimisation problems in their heads.

But, as most economists will tell you, the assumptions of rationality and consumption maximisation are mechanisms to derive general predictions about behaviour. And the funny thing is, biologists often do the same: they tend to treat their subjects as optimisers.

That places biologists in a similar position to economists. Biologists may be able to predict or explain behaviour, but often they have not actually explained how their subjects make decisions. If they were to attempt to predict how their subjects would behave in a changed environment – the type of predictive task many economists attempt – they would likely fail, as their understanding of the decision-making process is limited.

In Rationality for Mortals: How People Cope with Uncertainty, Gerd Gigerenzer has a great chapter considering how biologists treat decision making, and in particular, to what extent biologists consider that animals use simple decision-making tools such as heuristics (I’ve written two other posts on parts of the book here and here). Gigerenzer provides a few examples where biologists have examined heuristics, but much of the chapter asks whether biologists are missing something with their typical approach.

As a start, Gigerenzer notes that biologists are seeking to make predictions rather than accurate descriptions of decision making. However, Gigerenzer questions whether this “gambit” is successful.

Behavioral ecologists do believe that animals are using simple rules of thumb that achieve only an approximation of the optimal policy, but most often rules of thumb are not their interest. Nevertheless, it could be that the limitations of such rules of thumb would often constrain behavior enough to interfere with the fit with predictions. The optimality modeler’s gambit is that evolved rules of thumb can mimic optimal behavior well enough not to disrupt the fit by much, so that they can be left as a black box. It turns out that the power of natural selection is such that the gambit usually works to the level of accuracy that satisfies behavioral ecologists. Given that their models are often deliberately schematic, behavioral ecologists are usually satisfied that they understand the selective value of a behavior if they successfully predict merely the rough qualitative form of the policy or of the resultant patterns of behavior.

You could write the same paragraph about economists, minus the statement about natural selection. That said, if you were to give the people in an economic model objectives shaped by evolution, even that statement might hold.

But Gigerenzer has another issue with the optimisation approach in biology. As with most analysis of human decision making, “missing from biology is the idea that simple heuristics may be superior to more complex methods, not just a necessary evil because of the simplicity of animal nervous systems.” Gigerenzer writes:

There are a number of situations where the optimal solution to a real-world problem cannot be determined. One problem is computational intractability, such as the notorious traveling salesman problem (Lawler et al., 1985). Another problem is if there are multiple criteria to optimize and we do not know the appropriate way to convert them into a common currency (such as fitness). Thirdly, in many real-world problems it is impossible to put probabilities on the various possible outcomes or even to recognize what all those outcomes might be. Think about optimizing the choice of a partner who will bear you many children; it is uncertain what partners are available, whether each one would be faithful, how long each will live, etc. This is true about many animal decisions too, of course, and biologists do not imagine their animals even attempting such optimality calculations.

Instead the behavioral ecologist’s solution is to find optima in deliberately simplified model environments. We note that this introduces much scope for misunderstanding, inconsistency, and loose thinking over whether “optimal policy” refers to a claim of optimality in the real world or just in a model. Calculating the optima even in the simplified model environments may still be beyond the capabilities of an animal, but the hope is that the optimal policy that emerges from the calculations may be generated instead, to a lesser level of accuracy, by a rule that is simple enough for an animal to follow. The animal might be hardwired with such a rule following its evolution through natural selection, or the animal might learn it through trial and error. There remains an interesting logical gap in the procedure: There is no guarantee that optimal solutions to simplified model environments will be good solutions to the original complex environments. The biologist might reply that often this does turn out to be the case; otherwise natural selection would not have allowed the good fit between the predictions and observations. Success with this approach undoubtedly depends on the modeler’s skill in simplifying the environment in a way that fairly represents the information available to the animal.

Again, Gigerenzer could equally be writing about economics. I think we should be thankful, however, that biologists don’t take their results and develop policy prescriptions on how to get the animals to behave in ways we believe they should.

One interesting question Gigerenzer asks is whether humans and animals use similar heuristics. Consideration of this question might uncover evidence of the parallel evolution of heuristics in other lineages facing similar environmental structures, or even indicate a common evolutionary history. This could form part of the evidence as to whether these human heuristics are evolved adaptations.

But are animals more likely to use heuristics than humans? Gigerenzer suggests the answer is not clear:

It is tempting to propose that since other animals have simpler brains than humans they are more likely to use simple heuristics. But a contrary argument is that humans are much more generalist than most animals and that animals may be able to devote more cognitive resources to tasks of particular importance. For instance, the memory capabilities of small food-storing birds seem astounding by the standards of how we expect ourselves to perform at the same task. Some better-examined biological examples suggest unexpected complexity. For instance, pigeons seem able to use a surprising diversity of methods to navigate, especially considering that they are not long-distance migrants. The greater specialism of other animals may also mean that the environments they deal with are more predictable and thus that the robustness of simple heuristics may not be such an advantage.

Another interesting question is whether animals are also predisposed to the “biases” of humans. Is it possible that “animals in their natural environments do not commit various fallacies because they do not need to generalize their rules of thumb to novel circumstances”? The equivalent for humans is mismatch theory, which proposes that a lot of modern behaviour (and likely the “biases” we exhibit) is due to a mismatch between the environment in which our decision-making tools evolved and the environments we exercise them in today.

Finally, last year I wrote about why economics is not more “evolutionary”. Part of the answer there reflects a similar pattern to the above – biologists aren’t that evolutionary either.

The power of heuristics

Gerd Gigerenzer is a strong advocate of the idea that simple heuristics can make us smart. We don’t need complex models of the world to make good decisions.

The classic example is the gaze heuristic. Rather than solving a complex equation to catch a ball, which requires us to know the ball’s speed and trajectory and the effect of the wind, a catcher can simply run to keep the ball at a constant angle in the air, leading them to the point where it will land.
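For what it’s worth, here is a toy one-dimensional simulation of that idea – my own sketch, not Gigerenzer’s, with invented numbers and drag-free ball flight. The fielder never computes a landing point: they just speed up or slow down so that the ball’s angle of elevation stays roughly constant, and end up where it comes down.

```python
import math

# Toy simulation of the gaze heuristic (all numbers invented; simple
# drag-free projectile motion). The fielder adjusts running speed to keep
# the ball's angle of elevation roughly constant, never predicting a
# landing spot.

G = 9.81          # gravity, m/s^2
DT = 0.01         # timestep, s
MAX_SPEED = 8.0   # fielder's top running speed, m/s
GAIN = 20.0       # how hard the fielder reacts to a changing angle, m per radian

def simulate(ball_speed=25.0, launch_deg=45.0, fielder_x=50.0):
    angle = math.radians(launch_deg)
    bx, by = 0.0, 0.0                                   # ball starts at the batter
    vx, vy = ball_speed * math.cos(angle), ball_speed * math.sin(angle)

    prev_elevation = None
    while by >= 0.0:
        bx += vx * DT                                   # projectile motion
        by += vy * DT
        vy -= G * DT

        # Angle of elevation of the ball as the fielder sees it.
        elevation = math.atan2(by, fielder_x - bx)
        if prev_elevation is not None:
            rate = (elevation - prev_elevation) / DT
            # The heuristic: if the angle is rising, back away; if it is
            # falling, run in. Speed is capped at what legs can manage.
            speed = max(-MAX_SPEED, min(MAX_SPEED, GAIN * rate))
            fielder_x += speed * DT
        prev_elevation = elevation

    return bx, fielder_x                                # landing spot vs fielder

landing, fielder = simulate()
print("ball lands at %.1f m, fielder ends up at %.1f m" % (landing, fielder))
```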

Gigerenzer’s faith in heuristics is often taken to be based on the idea that people have limited processing capacity and are unable to solve the complex optimisation problems that would be needed in the absence of these rules. However, as Gigerenzer points out in Rationality for Mortals: How People Cope with Uncertainty, this is perhaps the weakest argument for heuristics:

[W]e will start off by mentioning the weakest reason. With simple heuristics we can be more confident that our brains are capable of performing the necessary calculations. The weakness of this argument is that it is hard to judge what complexity of calculation or memory a brain might achieve. At the lower levels of processing, some human capabilities apparently involve calculations that seem surprisingly difficult (e.g., Bayesian estimation in a sensorimotor context: Körding & Wolpert, 2004). So if we can perform these calculations at that level in the hierarchy (abilities), why should we not be able to evolve similar complex strategies to replace simple heuristics?

Rather, the advantage of heuristics lies in their low information requirements, their speed and, importantly, their accuracy:

One answer is that simple heuristics often need access to less information (i.e. they are frugal) and can thus make a decision faster, at least if information search is external. Another answer – and a more important argument for simple heuristics – is the high accuracy they exhibit in our simulations. This accuracy may be because of, not just in spite of, their simplicity. In particular, because they have few parameters they avoid overfitting data in a learning sample and, consequently, generalize better across other samples. The extra parameters of more complex models often fit the noise rather than the signal. Of course, we are not saying that all simple heuristics are good; only some simple heuristics will perform well in any given environment.

As the last sentence indicates, Gigerenzer is careful not to make any claims that heuristics generally outperform. A statement that a heuristic is “good” is ill-conceived without considering the environment in which it will be used. This is the major departure of Gigerenzer’s ecological rationality from the standard approach in the behavioural sciences, where the failure of a heuristic to perform in an environment is taken as evidence of bias or irrationality.
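A quick simulation makes the overfitting point in the quote above concrete. This is my own toy example with invented data, not one of Gigerenzer’s: a “tallying” rule that gives every cue a weight of one competes against a multiple regression that estimates a weight per cue from a small training sample, and the zero-parameter rule tends to generalise better out of sample.

```python
import numpy as np

# Toy comparison (invented data): a zero-parameter tallying rule versus a
# fitted regression. With a small training sample, the regression's
# estimated weights partly fit noise, so it predicts worse on new data.

rng = np.random.default_rng(0)

def one_run(n_train=15, n_test=500, n_cues=5, noise=1.5):
    true_w = rng.uniform(0.5, 1.0, n_cues)        # all cues point the same way
    X_train = rng.normal(size=(n_train, n_cues))
    X_test = rng.normal(size=(n_test, n_cues))
    y_train = X_train @ true_w + rng.normal(0, noise, n_train)
    y_test = X_test @ true_w + rng.normal(0, noise, n_test)

    # Multiple regression: estimate a weight per cue from the training sample.
    w_ols, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

    # Tallying: ignore the training data and give every cue a weight of one.
    w_tally = np.ones(n_cues)

    mse = lambda w: np.mean((X_test @ w - y_test) ** 2)
    return mse(w_ols), mse(w_tally)

runs = np.array([one_run() for _ in range(200)])
print("mean test MSE  regression: %.2f   tallying: %.2f" % tuple(runs.mean(axis=0)))
```

The extra parameters of the regression fit the noise in the training sample, which is exactly the point in the quote.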

Once you have noted which heuristic is being used in which environment, you can have more predictive power than a well-solved optimisation model provides. For example, an optimisation model of catching a ball will simply predict that the catcher will be at the place and time where the ball lands. Once you understand that they use the gaze heuristic to catch the ball, you can also predict the path that they will take to get to the ball – including that they won’t simply run in a straight line to catch it. If a baseball or cricket coach took the optimisation model too seriously, they would tell the catcher that they are running inefficiently by not going straight to where it will land. Instructions telling them to run in a straight line would likely make their performance worse.

Four perspectives on human decision making

I have been rereading Gerd Gigerenzer’s collection of essays Rationality for Mortals: How People Cope with Uncertainty. It covers most of Gigerenzer’s typical turf – ecological rationality, heuristics that make us smart, understanding risk and so on.

In the first essay, Gigerenzer provides four categories of approaches to analysing decision making – unbounded rationality, optimisation under constraints, cognitive illusions (heuristics and biases) and ecological rationality. At the end of this post, I’ll propose a fifth.

1. Unbounded rationality

Unbounded rationality is the territory of neoclassical economics. Omniscient and omnipotent people optimise. They are omniscient in that they can see the future – or at least live in a world of risk where they can assign probabilities. They are omnipotent in that they have all the calculating power they need to make perfect decisions. And with that foresight and power, they make optimal decisions.

Possibly the most important point about this model is that it is not designed to describe precisely how people make decisions, but rather to predict behaviour. And in many dimensions, it does quite well.

2. Optimisation under constraints

In this approach, people are no longer omniscient. They need to search for information. As Gigerenzer points out, however, this attempt to inject realism creates another problem. Optimisation with constraints can be even harder to solve than optimisation with unbounded rationality. As a result, the cognitive power required is even greater.

Gigerenzer is adamant that optimisation under constraints is not bounded rationality – and if we use Herbert Simon’s definition of the term, I would agree – but analysis of this type commonly attracts the “boundedly rational” label. Gigerenzer does not want the unrealistic nature of optimisation under constraints to tar the concept of bounded rationality.

3. Cognitive illusions – logical irrationality

The next category is the approach in much of the behavioural sciences and behavioural economics. It is often labelled the “heuristics and biases” program. This program looks to understand the processes by which people make judgments and, in many cases, seeks to show errors of judgment or cognitive illusions. This program has generated a long list of biases – just look at the Wikipedia page for a taste.

Gigerenzer picks two main shortcomings of this approach. First, although the program successfully shows failures of logic, it does not look at the underlying norms. Second, it tends not to produce testable theories of heuristics. As Gigerenzer states, “mere verbal labels for heuristics can be used post hoc to “explain” almost everything.”

An example is analysis of overconfidence bias. People are asked a question such as “Which city is farther north – New York or Rome?” and then asked how confident they are that their answer is correct. When participants are 100 per cent certain of their answer, fewer than 100 per cent of their answers tend to be correct. The pattern of apparent overconfidence continues at lower confidence levels.

There are several critiques of this analysis, but one common suggestion is that people are presented with an unrepresentative sample of questions. People typically use alternative cues to answer a question such as the above. In the case of latitude, temperature is a plausible cue. The apparent overconfidence occurs because the selected cities are a biased sample in which the cue fails more often than it does in the world at large. If the cities are randomly sampled from the real world, the overconfidence disappears. The net result is that what appears to be a bias may be better explained by the nature of the environment in which the decision is made.
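Here is a toy version of that sampling argument – my own construction with invented numbers, not the actual experimental items. Cities have a latitude and a temperature that tracks latitude imperfectly; a respondent answers with the temperature cue and is well calibrated on randomly drawn pairs, but looks “overconfident” on a question set that oversamples pairs where the cue fails.

```python
import numpy as np

# Toy model (invented numbers) of the sampling argument: a temperature cue
# for "which city is farther north?" works well on random pairs, so a
# cue-user's confidence is calibrated there, but looks overconfident on a
# question set stacked with pairs where the cue happens to fail.

rng = np.random.default_rng(3)
n = 500
latitude = rng.uniform(0, 60, n)
temperature = 30 - 0.4 * latitude + rng.normal(0, 4, n)   # colder further north

def cue_correct(i, j):
    # The temperature cue: guess that the colder city is farther north.
    guess_i_norther = temperature[i] < temperature[j]
    truth_i_norther = latitude[i] > latitude[j]
    return guess_i_norther == truth_i_norther

pairs = [(i, j) for i, j in rng.integers(0, n, (20000, 2)) if i != j]
random_acc = np.mean([cue_correct(i, j) for i, j in pairs])

# An experimenter's "interesting" question set: half the pairs are ones
# where the cue fails (like New York versus Rome).
fails = [p for p in pairs if not cue_correct(*p)]
works = [p for p in pairs if cue_correct(*p)]
k = min(len(fails), len(works))
selected_acc = np.mean([cue_correct(i, j) for i, j in fails[:k] + works[:k]])

print("cue validity on random pairs (calibrated confidence): %.2f" % random_acc)
print("accuracy on the selected question set:                %.2f" % selected_acc)
```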

4. Ecological rationality

Ecological rationality departs from the heuristics and biases program by examining the relationship between mind and environment, rather than the mind and logic. Human behaviour is shaped by scissors with two blades – the cognitive capabilities of the actor, and the environment. You cannot understand human behaviour without understanding both the capabilities of the decision maker and the environment in which those capabilities are exercised. Gigerenzer would apply the bounded rationality label to this work.

On this basis, there are three goals to the ecological rationality program. The first is to understand the adaptive toolbox – the heuristics of the decision maker and their building blocks. The second is to understand the environmental structures in which different heuristics are successful. The third is to use this analysis to improve decision making through designing better heuristics or changing the environment. This can only be done once you understand the adaptive toolbox and the environments in which different tools are successful.

Gigerenzer provides a neat example of how ecological rationality departs from the heuristics and biases program in its analysis of a problem – in this case, optimal asset allocation. Harry Markowitz, who received a Nobel Memorial Prize in Economics for his work on optimal asset allocation, did not use the results of his analysis in his own investing. Instead, he invested his money using the 1/N rule – spread your assets equally across N assets.

The heuristics and biases program might look at this behaviour and note that Markowitz is not following the optimal strategy he himself derived. He is making important decisions without using all the available information. Perhaps it is due to cognitive limitations?

As Gigerenzer notes, optimisation is not always the best solution. Where the problem is computationally intractable or the optimisation solution lacks robustness due to estimation errors, heuristics may outperform. In the case of asset allocation, Gigerenzer notes work showing that 500 years of data would have been required for Markowitz’s optimisation rule to outperform his practice of 1/N. In a world of uncertainty, it can be beneficial to leave information on the table. Markowitz was using a simple heuristic for an important decision, but rightly so, as it is superior for the environment in which he was making the decision.
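A rough simulation illustrates why. This is not the study Gigerenzer cites, just my own sketch with made-up parameters: mean-variance (tangency) weights estimated from a short return history are compared with 1/N on their out-of-sample Sharpe ratios, and the estimation error in the plug-in weights typically leaves the “optimal” portfolio behind the naive one.

```python
import numpy as np

# Toy illustration (simulated returns, made-up parameters): mean-variance
# weights estimated from a short history versus a simple 1/N split,
# compared on out-of-sample Sharpe ratios.

rng = np.random.default_rng(1)
N = 10                                              # number of assets
true_mu = rng.uniform(0.02, 0.10, N) / 12           # monthly expected returns
A = rng.normal(0, 0.04, (N, N))
true_cov = A @ A.T / 12 + np.eye(N) * 1e-4          # monthly covariance

def tangency_weights(mu, cov):
    w = np.linalg.solve(cov, mu)                    # plug-in mean-variance weights
    return w / w.sum()

def experiment(n_history=60, n_future=1200):
    hist = rng.multivariate_normal(true_mu, true_cov, n_history)
    future = rng.multivariate_normal(true_mu, true_cov, n_future)

    w_mv = tangency_weights(hist.mean(axis=0), np.cov(hist, rowvar=False))
    w_eq = np.ones(N) / N                           # the 1/N rule

    sharpe = lambda w: (future @ w).mean() / (future @ w).std()
    return sharpe(w_mv), sharpe(w_eq)

results = np.array([experiment() for _ in range(100)])
print("median out-of-sample Sharpe  mean-variance: %.3f   1/N: %.3f"
      % tuple(np.median(results, axis=0)))
```

With only five years of simulated history, the estimated means and covariances are noisy enough that the plug-in “optimal” weights usually do worse out of sample than simply splitting money equally.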

5. Evolutionary rationality

Gigerenzer proposes four categories, but I’ll lay out a fifth (I’m not sure about the label I’ve just given it). Evolutionary rationality develops a deeper understanding of the cognitive capabilities of the decision maker through an analysis of the adaptive basis of traits. This perspective could inform all four of the above categories of decision making. It could be used to assess what is being optimised, what the constraints might be, how biases might be due to mismatch between past and present environments, and what the heuristics are.

Gigerenzer notes the possibility of going into this territory, but deliberately holds back. In the third chapter of the book, he writes:

[H]uman psychologists are not able to utilize many of the lines of evidence that biologists apply to justify that a trait is adaptive. We can make only informed guesses about the environment in which the novel features of human brains evolved, and because most of us grow up in an environment very different to this, the cognitive traits we exhibit might not even have been expressed when our brains were evolving. …

ABC avoids the difficult issue of demonstrating adaptation in humans by defining ecological rationality as the performance, in terms of a given currency, of a given heuristic in a given environment. We emphasize that currency and environment have to be specified before the ecological rationality of a heuristic can be determined; thus, take-the-best is more ecologically rational (both more accurate and frugal) than tallying in noncompensatory environments but not more accurate in compensatory ones. Unlike claiming that a heuristic is an adaptation, a claim that it is ecologically rational deliberately omits any implication that this is why the trait originally evolved, or has current value to the organism, or that either heuristic or environment occurs for real in the present or past. Ecological rationality might then be useful as a term indicating a more attainable intermediate step on the path to a demonstration of adaptation.
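The take-the-best versus tallying comparison in that passage is easy to check in a toy environment (my own sketch, with arbitrary cue weights). Take-the-best decides on the first cue that distinguishes two options, checking cues in order of importance; tallying just counts positive cues. When each cue outweighs all the cues below it (a noncompensatory environment), take-the-best comes out ahead; when all cues matter equally, tallying wins.

```python
import numpy as np

# Toy check (arbitrary weights) of take-the-best versus tallying in
# noncompensatory and compensatory cue environments.

rng = np.random.default_rng(4)
K = 5                                             # number of binary cues

def accuracy(weights, n_pairs=20000):
    hits_ttb = hits_tally = counted = 0
    while counted < n_pairs:
        a, b = rng.integers(0, 2, (2, K))          # two options' cue profiles
        truth = np.sign(weights @ (a - b))         # which option really scores higher
        if truth == 0:
            continue                               # no right answer for this pair
        counted += 1
        first = np.nonzero(a != b)[0][0]           # most important discriminating cue
        hits_ttb += np.sign(a[first] - b[first]) == truth
        hits_tally += np.sign(a.sum() - b.sum()) == truth
    return hits_ttb / counted, hits_tally / counted

noncompensatory = np.array([16, 8, 4, 2, 1.])      # each cue outweighs all below it
compensatory = np.ones(K)                          # every cue matters equally

print("noncompensatory  take-the-best %.2f  tallying %.2f" % accuracy(noncompensatory))
print("compensatory     take-the-best %.2f  tallying %.2f" % accuracy(compensatory))
```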

There is a lot more interesting material in Chapter 3 on the link between Gigerenzer’s program and the approach taken by biologists. That will be the subject of a later post.

A week of links

Links this week:

  1. Big ideas are destroying international development. Dream smaller.
  2. Appealing to my biases – the skeptics guide to institutions Part 1 and Part 2.
  3. Most published results in finance are false.
  4. Be mean, look smarter.
  5. Constructing illusions.
  6. Predicting complex phenotypes from genomic data – for those who confuse these two statements:

“The brain is complex and nonlinear and many genes interact in its construction and operation.”

“Differences in brain performance between two individuals of the same species must be due to nonlinear (non-additive) effects of genes.”

Genetics and education policy

Philip Ball has an article in the December issue of Prospect (ungated on his blog) arguing that consideration of the genetic basis of social problems is a distraction from socioeconomic causes. The strawman punchline for the Prospect article is “It’s delusional to believe that everything can be explained by genetics”.

The article has drawn a response from one of the people named in the article, Dominic Cummings. Ball suggests that Cummings presents “genetics as a fait accompli – if you don’t have the right genes, nothing much will help”, although this statement suggests Ball had not invested much effort getting across Cummings’s actual position (as contained in this now infamous essay). Ball responded in turn, with Cummings firing back (in an update at the bottom of the page), and Ball responding again.

Beyond the tit for tat – read their respective posts for that – there are some interesting points about whether genetics tells us anything about education policy.

As a start, Ball claims that “Social class remains the strongest predictor of educational achievement in the UK”, referencing this article. However, the authors of that article don’t consider the role of genetics or other potential predictors. The references that article gives for the claim are similarly devoid of relevant comparisons, which is unsurprising as they largely comprise policy positioning documents from various organisations. It’s hard to credibly claim something is a superior predictor when it is not assessed against the alternatives.

So, what is the evidence on this point? For one, we have twin and adoption studies. As one example, Bruce Sacerdote studied Korean adoptees in the United States (admittedly, not the UK as per the quote) and found that shared environment (which would include socioeconomic status) explained 16 per cent of the variation in educational attainment. Genetic factors explained 44 per cent. This is a consistent finding in adoption studies, with children more closely resembling their biological parents than their adoptive parents. For twin studies, an Australian analysis found a 57 per cent genetic and 24 per cent shared environment contribution to variation in education. A meta-analysis of heritability estimates of educational attainment found that, in the majority of samples, genetic variation explained more of the variation than shared environment.

Of course, we don’t have the genetic data or understanding at hand just yet, but there are other factors such as IQ that are better predictors of education than social class. This territory is also complicated – there are genetic effects on both IQ and social class – but IQ tends to outperform. This meta-analysis shows that IQ is a better predictor of education, income and occupation than socioeconomic status – not overwhelmingly so, but superior nonetheless.

Then there is the link between genetic factors and socioeconomic status, with a long line of studies finding a relationship. One of the more recent is a study by Daniel Benjamin and friends (ungated pdf). They found heritability of permanent income (a 20-year average) of 0.58 for men and 0.46 for women. Part of the predictive power of socioeconomic status comes from its genetic basis. Gregory Clark’s hypothesis that low social mobility is a result of genetic factors reflects this body of work.

Turning next to Ball’s pessimism about the future of genetics, he states:

In September an international consortium led by Daniel Benjamin of Cornell University in New York reported on a search for genes linked to cognitive ability using a new statistical method that overcomes the weaknesses of traditional surveys. The method cross-checks such putative associations against a “proxy phenotype” – a trait that can ‘stand in’ for the one being probed. In this case the proxy for cognitive performance was the number of years that the tens of thousands of test subjects spent in education.

From several intelligence-linked genes claimed in previous work, only three survived this scrutiny. More to the point, those three were able to account for only a tiny fraction of the inheritable differences in IQ. Someone blessed with two copies of all three of the “favourable” gene variants could expect a boost of just 1.8 IQ points relative to someone with none of these variants. As the authors themselves admitted, the three gene variants are “not useful for predicting any particular individual’s performance because the effect sizes are far too small”.

This, however, is only part of the picture. If we look at another study in which Benjamin was involved, three SNPs (single nucleotide polymorphisms – single base changes in the DNA code) were found to affect educational attainment. In total, they explained 0.02 per cent of the variation in educational attainment – practically nothing. Combine all the SNPs in the 100,000-person sample, however, and you edge up to 2.5 per cent. More interesting still, they calculated that with a large enough sample they could explain over 20 per cent of the variation. Co-author Philipp Koellinger explains this in a video I recently linked. Although this study found variants with low explanatory power, it also points to the potential to explain much more with larger samples.

For more background on the feasibility of identifying the causal genetic variants for traits such as IQ, it’s worth looking at this paper by Steve Hsu. Possibly the most important point is that the causal variants for traits such as cognitive ability and height are additive in their effect. In his final response, Ball states: “And that might be because we are thinking the wrong way – too linearly – about how many if not most genes actually operate.” But the evidence suggests that is largely how they work. Although a few years old now, this paper’s theoretical and empirical argument that genetic effects are largely additive has generally been affirmed in later research. This considerably simplifies the task of predicting outcomes from someone’s genome. In fact, it is one reason selective breeding has been so successful, and genetic data is already being used in cattle breeding (an example of the gap between entrepreneurship and policy development – while some of us are arguing about whether this stuff is possible, someone else is already doing it).
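To see how additivity plus bigger samples pays off, here is a toy polygenic score simulation – all parameters invented, and nothing to do with the actual studies above. Per-SNP effects are estimated one at a time in a training sample, combined into a score, and the variance explained in a fresh sample climbs toward the full heritable share as the training sample grows.

```python
import numpy as np

# Toy additive model (all parameters invented): a score built from noisily
# estimated per-SNP effects explains more of the outcome variation as the
# estimation sample grows.

rng = np.random.default_rng(2)
M = 300                                        # number of causal SNPs
h2 = 0.4                                       # share of variance the SNPs jointly explain
beta = rng.normal(0, np.sqrt(h2 / (M * 0.5)), M)   # small additive effects

def genotypes(n, maf=0.5):
    # 0/1/2 copies of the minor allele at each of M independent SNPs.
    return rng.binomial(2, maf, (n, M)).astype(float)

def variance_explained(n_train, n_test=2000):
    G_tr, G_te = genotypes(n_train), genotypes(n_test)
    y_tr = G_tr @ beta + rng.normal(0, np.sqrt(1 - h2), n_train)
    y_te = G_te @ beta + rng.normal(0, np.sqrt(1 - h2), n_test)

    # One-SNP-at-a-time "GWAS" effect estimates from the training sample.
    Gc = G_tr - G_tr.mean(axis=0)
    beta_hat = Gc.T @ (y_tr - y_tr.mean()) / (Gc ** 2).sum(axis=0)

    score = G_te @ beta_hat                    # polygenic score for new people
    return np.corrcoef(score, y_te)[0, 1] ** 2

for n in (1000, 5000, 20000):
    print("training sample %6d  variance explained %.2f" % (n, variance_explained(n)))
```

With a small training sample the score explains little; with a large one it approaches the full heritable share – the same qualitative pattern Koellinger describes.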

Now, supposing you have this genetic data, how might this change education? Returning to the article I linked above (ungated pdf), Benjamin and friends suggested this genetic information could be used to better target interventions. They propose early identification of dyslexia as an example.

They also suggest using genetic data as controls. This could provide more precision in studies of whether interventions targeting socioeconomically disadvantaged children are effective, as the genetic controls allow you to home in on the effect you are interested in. In the question and answer session of a video of a talk by Jason Fletcher that I recently linked, Benjamin pointed to the famous Perry Preschool Project and noted that additional precision through the use of genetic data would have been of great value.
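A small simulation shows what that extra precision looks like – again my own toy example with invented numbers, not the Perry analysis. When part of the outcome is predictable from a genetic score, adding the score as a control in a randomised evaluation shrinks the standard error on the estimated treatment effect.

```python
import numpy as np

# Toy illustration (invented numbers): controlling for a genetic score soaks
# up residual variation and tightens the estimate of a randomised
# intervention's effect.

rng = np.random.default_rng(5)
n = 2000
treated = rng.integers(0, 2, n).astype(float)    # randomised intervention
gscore = rng.normal(0, 1, n)                     # a polygenic score
outcome = 0.2 * treated + 1.5 * gscore + rng.normal(0, 1, n)

def treatment_estimate(controls):
    # Ordinary least squares by hand: treatment coefficient and its standard error.
    X = np.column_stack([np.ones(n), treated] + controls)
    beta = np.linalg.lstsq(X, outcome, rcond=None)[0]
    resid = outcome - X @ beta
    sigma2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], se

print("effect without controls:     %.3f (se %.3f)" % treatment_estimate([]))
print("effect with genetic control: %.3f (se %.3f)" % treatment_estimate([gscore]))
```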

Ball also indirectly alludes to another reason to learn about genetic factors. In his last response, he writes:

Personally, I find a little chilling the idea that we might try to predict children’s attainments by reading their genome, and gear their education accordingly – not least because we know (and Plomin would of course acknowledge this) that genes operate in conjunction with their environment, and so whatever genetic hand you have been dealt, its outcomes are contingent on experience.

This argument runs both ways. Supposing there are large gene-environment interactions, how can you understand the effects of changing the environment without looking at the way that environment affects people via their genome? As an example, Jason Fletcher examined how variation in a gene changed the response to tobacco taxation policy (he talks about this in a video I recently linked). Those with a certain allele responded to the tax and reduced their smoking. Others didn’t. To be honest, I’m not sold on the results of this particular study, but it illustrates that genetic factors need to be considered if these gene-environment interactions are as large as people such as Ball believe.

[I should admit at this point that G is for Genes: The Impact of Genetics on Education and Achievement is sitting unread in my reading pile….]

Putting it together, Ball is off track in his suggestion that learning about and targeting genetic factors distracts from dealing with socioeconomic issues. Understanding genetic factors and understanding socioeconomic factors are complements, and by disentangling their effects we could better tailor education to address each.

That is not to say that the genetic enterprise is guaranteed to be successful. But there is plenty of evidence that our genes are relevant and, on that basis, should be considered.

Further, there are changes we can make today. Ball asks what genetics can add beyond recognition that some children are more talented than others. The thing is, much schooling is still structured as though we are blank slates. Maybe it is an understanding of genetics that will finally get us to a point where education is better designed for people with different capacities, improving the experience across the full range of abilities and backgrounds.

The beauty of self interest

In my review of E.O. Wilson’s The Social Conquest of Earth, I quoted this passage which captures Wilson’s conception of the origin of cooperation in humans.

Selection at the individual level tends to create competitiveness and selfish behaviour among group members – in status, mating, and the securing of resources. In opposition, selection between groups tends to create selfless behavior, expressed in greater generosity and altruism, which in turn promote stronger cohesion and strength of the group as a whole.

This passage from Matt Ridley strikes at the heart of Wilson’s dichotomy between selfishness and generosity:

“Group selection” has always been portrayed as a more politically correct idea, implying that there is an evolutionary tendency to general altruism in people. Gene selection has generally seemed to be more of a right-wing idea, in which individuals are at the mercy of the harsh calculus of the genes.

Actually, this folk understanding is about as misleading as it can be. Society is not built on one-sided altruism but on mutually beneficial co-operation.

Nearly all the kind things people do in the world are done in the name of enlightened self-interest. Think of the people who sold you coffee, drove your train, even wrote your newspaper today. They were paid to do so but they did things for you (and you for them). Likewise, gene selection clearly drives the evolution of a co-operative instinct in the human breast, and not just towards close kin.

You can read the full article here.