Humans are inherently risk averse. When offered a coin toss with a reward of $10,000 for heads but a loss of $10,000 for tails, most people would decline. They would likely agree to pay a significant sum to avoid the gamble, despite the expected value of the gamble being zero.

When economists describe a person's preferences, they often build in some form of risk aversion. A risk-averse person will always prefer a sum with certainty to a gamble with the same expected value. One way economists capture this is by describing a person's preferences as logarithmic: the utility an individual gets from, say, a sum of money increases at a diminishing rate. They might value an increase in their wealth from $1 to $10 the same as an increase from $10 to $100. These preferences then shape the choices the person makes. For example, they might value a 50-50 gamble between $1 and $100 at only $10, despite the expected value of the gamble being slightly above $50.
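The $10 valuation falls out of the arithmetic: under log utility, the value of a gamble is its certainty equivalent, exp(E[ln x]), which is the probability-weighted geometric mean of the outcomes. A minimal sketch (the function name is mine, for illustration):

```python
import math

def certainty_equivalent_log(outcomes, probs):
    # Under log utility, the certainty equivalent is exp(E[ln x]),
    # the probability-weighted geometric mean of the outcomes.
    expected_log_utility = sum(p * math.log(x) for x, p in zip(outcomes, probs))
    return math.exp(expected_log_utility)

expected_value = sum(p * x for x, p in zip([1, 100], [0.5, 0.5]))  # 50.5
ce = certainty_equivalent_log([1, 100], [0.5, 0.5])                # 10.0
```

So a log-utility decision-maker treats the gamble as worth $10 even though its expected value is $50.50.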

Beyond the extensive research on preferences in the behavioural economics literature, the use of logarithmic preferences to approximate decision-making has some support in empirical work on how we view numbers in our minds. In 2008, Stanislas Dehaene and colleagues made a useful contribution to this area through their examination of whether our mental mapping of numbers was inherent or trained.

First, some background. As the authors note, a number of experiments have shown that children map numbers to space in a logarithmic fashion. They write:

When asked to point toward the correct location for a spoken number word onto a line segment labeled with 0 at left and 100 at right, even kindergarteners understand the task and behave nonrandomly, systematically placing smaller numbers at left and larger numbers at right. They do not distribute the numbers evenly, however, and instead devote more space to small numbers, imposing a compressed logarithmic mapping. For instance, they might place number 10 near the middle of the 0-to-100 segment.

This logarithmic view of the world does not last. Between first and fourth grade, children move from a logarithmic to a more linear mapping of numbers. This transition appears first for small numbers and extends to higher numbers as they age, although some logarithmic mapping persists for very large numbers.
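The difference between the two mappings is easy to make concrete. On a 1-to-100 line, a linear mapping places 10 about 9% of the way along, while a compressed logarithmic mapping places it at the midpoint, just as the kindergarteners do. A sketch (helper names are mine):

```python
import math

def linear_position(n, lo=1, hi=100):
    # Fraction of the way along the line: equal differences get equal space.
    return (n - lo) / (hi - lo)

def log_position(n, lo=1, hi=100):
    # Compressed mapping: equal *ratios* get equal space.
    return (math.log(n) - math.log(lo)) / (math.log(hi) - math.log(lo))

# linear_position(10) is about 0.09; log_position(10) is 0.5.
```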

Dehaene and colleagues contributed to this picture through their examination of whether the move from logarithmic to linear mapping was a result of formal schooling or a natural process of brain maturation. To study this, they undertook some number-mapping exercises with the Mundurucu, an Amazonian group with little access to education or other instruments that may affect their perception of numbers (such as maps and rulers).

To test how the Mundurucu map numbers, participants were shown a line with one dot at one end and either 10 or 100 dots at the other. They were then presented with quantities of dots between one and 10 (or one and 100) and asked to place each quantity at the appropriate point on the line.

This exercise showed that logarithmic mapping persisted into adulthood for the Mundurucu, even for numbers between one and ten. These findings suggest that it is the experience of children who receive formal education in mathematics that shifts their mental mapping to a linear one, whether through the formal education itself or through some other cultural cause.

This study raises a number of implications for the way people’s risk preferences are formed. If people perceive quantities in a logarithmic fashion, they will tend to be risk averse. As they move to a more linear way of mapping numbers, this could coincide with a reduction in risk aversion. Does the education of children in mathematics tend to increase their risk tolerance through changing the way they see numbers?

This could have a number of follow-on effects. A reduction in risk aversion would naturally see an increase in risk-taking activity: people would weight the possibility of great wealth more highly and be willing to accept the risk of a larger loss to pursue it. An economy full of people with a greater risk tolerance could have more entrepreneurial activity, greater wealth (with some unlucky losers) and a stronger tendency to chase wealth.

From a historical perspective, the questions become more interesting. Could the increasing degree of education over the last few hundred years have been training children to be more risk seeking in their activities? If so, could we argue that an environmental cue is part of the reason modern economies look the way they do?

An additional implication of the study concerns the way large numbers are dealt with. On a logarithmic scale, large numbers appear closer together. If working with large numbers accurately matters (for example, if it matters whether I pay you $100 or $110), the shift to a linear way of thinking will reap some important dividends. This is not so much a question of risk aversion as of the ability to differentiate between numbers as they get large.
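The compression is a constant-ratio effect: on a log scale, the gap between $100 and $110 is exactly the gap between $10 and $11, even though the absolute difference is ten times larger. For instance:

```python
import math

# Equal ratios look equally far apart on a log scale,
# so $100 vs $110 "feels" the same as $10 vs $11.
gap_small = math.log(11) - math.log(10)    # ln(1.1)
gap_large = math.log(110) - math.log(100)  # also ln(1.1)
```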

Although I’ve mentioned the log-number-sense as a general principle to use for explaining certain irrationalities in a previous comment, I don’t feel like that is the justification most economists use for log-utilities. What is the standard justification? Is it just an artifact of Daniel Bernoulli using log-returns to resolve the standard presentation of the St. Petersburg paradox and introduce the concept of risk aversion? It seems like a weird artifact to keep around, given that the St. Petersburg paradox is trivial to modify to be paradoxical again for any unbounded utility function. I’ve always thought that a bounded function like a sigmoid or arctan would make more sense. Do you know why these aren’t used? Or are they used and I am just not reading deeply enough?
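To make the point concrete, here is a sketch of the partial sums involved (the payoff functions are my own choices for illustration):

```python
import math

def expected_utility_partial(n_terms, payoff):
    # Partial sum of E[u(X)] for the St. Petersburg gamble:
    # the game ends at round n with probability 2**-n, paying payoff(n).
    return sum(2 ** -n * payoff(n) for n in range(1, n_terms + 1))

# Raw expected value: each term is 2**-n * 2**n = 1, so the partial
# sums grow without bound as more rounds are included.
ev = expected_utility_partial(100, lambda n: 2 ** n)  # 100.0 after 100 terms

# Log utility tames it: terms are n * ln(2) / 2**n, summing to 2 * ln(2).
eu_log = expected_utility_partial(100, lambda n: n * math.log(2))

# But a modified gamble paying exp(2**n) makes the log-utility terms
# 2**-n * ln(exp(2**n)) = 1 again, restoring the divergence.
```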

For a lot of micro, there is no functional form specified: the analysis is done on the basis of the axioms of completeness and transitivity, and usually assumptions of continuity and convexity. Log utilities are used when a functional form is required because they satisfy those assumptions and are tractable, although they are a long way from being used exclusively. It’s not so much a historical artifact as a useful tool.

I have seen arctan utility functions before, but they’re not as easy to work with. For most analysis, switching between logarithmic and arctan utility functions doesn’t make much difference, although I haven’t seen any full exploration of this (not to say one doesn’t exist).
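One way to probe this is to compare certainty equivalents under the two functional forms directly. A sketch (names are mine; note that the comparison depends heavily on how arctan is scaled relative to the stakes):

```python
import math

def certainty_equivalent(u, u_inverse, outcomes, probs):
    # The sure amount with the same utility as the gamble's expected utility.
    expected_utility = sum(p * u(x) for x, p in zip(outcomes, probs))
    return u_inverse(expected_utility)

gamble = ([1, 100], [0.5, 0.5])
ce_log = certainty_equivalent(math.log, math.exp, *gamble)   # 10.0
ce_atan = certainty_equivalent(math.atan, math.tan, *gamble)
# Unscaled arctan saturates quickly at these stakes; a fuller
# comparison would sweep u(x) = atan(x / s) across scales s.
```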