The Paradox of Trust
In a chapter of The Community of Advantage: A Behavioural Economist’s Defence of the Market, Robert Sugden makes some interesting arguments about how we should interpret the results of the trust game.
First, what is the Trust Game? Sugden explains:
The ‘Trust Game’ was first investigated experimentally by Joyce Berg, John Dickhaut, and Kevin McCabe (1995). … In Berg et al.’s game, two players (A and B) are in separate rooms and never know one another’s identity. Each player is given $10 in one-dollar bills as a ‘show up fee’. A puts any number of these bills, from zero to ten, in an envelope which will be sent to B; he keeps the rest of the money for himself. The experimenter supplements this transfer so that B receives three times what A chose to send. B then puts any number of the bills she has received into another envelope, which is returned to A; she keeps the rest of the money for herself. The game is played once only, and the experiment is set up so that no one (including the experimenter) can know what any other identifiable person chooses to do. The game is interesting to theorists of rational choice because it provides the two players with an opportunity for mutual gain, but if the players are rational and self-interested, and if each knows that this is true of the other, no money will be transferred. (It is rational for B to keep everything she is sent; knowing this, it is rational for A to send nothing.)
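The payoff arithmetic is simple enough to write down. Here is a minimal sketch in Python: the dollar amounts follow Berg et al.’s design, but the particular send/return choices in the examples are mine, purely for illustration.

```python
ENDOWMENT = 10   # each player's show-up fee, in dollars
MULTIPLIER = 3   # the experimenter triples whatever A sends

def trust_game_payoffs(sent: int, returned: int) -> tuple[int, int]:
    """Final payoffs (A, B) in the Berg et al. Trust Game.

    `sent` is what A puts in the envelope (0..10); B receives
    MULTIPLIER * sent and chooses `returned` (0..MULTIPLIER * sent).
    """
    assert 0 <= sent <= ENDOWMENT
    received = MULTIPLIER * sent
    assert 0 <= returned <= received
    payoff_a = ENDOWMENT - sent + returned
    payoff_b = ENDOWMENT + received - returned
    return payoff_a, payoff_b

# If A sends $5, B receives $15; an even split of the gains leaves both better off:
print(trust_game_payoffs(5, 10))   # (15, 15)

# The self-interested benchmark: B's payoff is maximised by returning nothing,
# so a self-interested A who anticipates this sends nothing.
print(trust_game_payoffs(5, 0))    # (5, 25) -- A regrets sending
print(trust_game_payoffs(0, 0))    # (10, 10) -- the no-trust outcome
```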
There is a sizeable body of empirical evidence that player A often does send money and B often returns money. How can this be explained? One option is to draw on the concept of reciprocity.
In this literature, it is a standard modelling strategy to follow Matthew Rabin (1993) in characterizing intentions as kind or unkind. … The greater the degree to which one player benefits the other by forgoing his own payoffs, the kinder he is. Rabin’s hypothesis is that individuals derive utility from their own payoffs, from being kind towards people who are being kind to them, and from being unkind towards people who are being unkind to them.
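For concreteness, Rabin’s utility function has roughly the following form; the notation below is my own gloss on the 1993 paper rather than anything taken from Sugden:

$$U_i(a_i, b_j, c_i) \;=\; \pi_i(a_i, b_j) \;+\; \tilde{f}_j(b_j, c_i)\,\bigl[\,1 + f_i(a_i, b_j)\,\bigr]$$

Here $\pi_i$ is player $i$’s material payoff, $f_i$ is $i$’s ‘kindness’ towards $j$ (positive when $i$ gives up material payoff to push $j$ above an equitable benchmark, negative when $i$ pushes $j$ below it), and $\tilde{f}_j$ is $i$’s belief about how kind $j$ is being to him. The product term generates the reciprocity: when $i$ believes $j$ is being kind ($\tilde{f}_j > 0$), $i$’s utility is increasing in his own kindness $f_i$; when $i$ believes $j$ is being unkind, it is decreasing, so $i$ prefers to respond unkindly.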
But if you think this hypothesis through, there is a problem, which Sugden calls the Paradox of Trust.
[I]t seems that any reasonable extension of Rabin’s theory will have the following implication for the Trust Game: It cannot be the case that A plays send, expecting B to play return with probability 1, and that B, knowing that A has played send, plays return. To see why not, suppose that A chooses send, believing that B will choose return with probability 1.
A has not faced any trade-off between his payoffs and B’s, and so has not had the opportunity to display kindness or unkindness.
…
Since Rabin often describes positive reciprocity as ‘rewarding’ kind behaviour (and describes negative reciprocity as ‘punishing’ unkind behaviour), the idea seems to be that B’s choice of return is her way of rewarding A for the goodness of send. But if A’s action was self-interested, it is not clear why it deserves reward.
It may seem paradoxical that, in a theory in which individuals are motivated by reciprocity, two individuals cannot have common knowledge that they will both participate in a practice of trust. Nevertheless, this conclusion reflects the fundamental logic of a modelling strategy in which pro-social motivations are represented as preferences that are acted on by individually rational agents. It is an essential feature of (send, return), understood as a practice of trust, that both players benefit from both players’ adherence to the practice. If A plays his part in the practice, expecting B to play hers, he must believe and intend that his action will lead to an outcome that will in fact benefit both of them. Thus, if pro-sociality is interpreted as kindness—as a willingness to forgo one’s own interests to benefit others—A’s choice of send cannot signal pro-social intentions, and so cannot induce reciprocal kindness from B. I will call this the Paradox of Trust.
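To see the logic in numbers: the binary send/return payoffs below are my own stylized version of Berg et al.’s game, not Sugden’s, but they show why, if A is certain that B will return, sending is the payoff-maximizing choice for A himself, and so involves no sacrifice that B could reward.

```python
ENDOWMENT, MULTIPLIER = 10, 3

# Stylized binary Trust Game: A either keeps his $10 or sends all of it;
# B, holding the tripled $30, either keeps it all or returns half.
sent, returned = 10, 15               # illustrative numbers only

a_keep = ENDOWMENT                    # A's payoff if he sends nothing: 10
a_send = ENDOWMENT - sent + returned  # A's payoff if he sends and B returns: 15

# With return expected with probability 1, send strictly raises A's own payoff,
# so A gives up nothing for B's sake -- there is no kindness for B to reciprocate.
print(a_send > a_keep)                # True
```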
Is there an alternative way of seeing this problem? Sugden turns to the idea of mutually beneficial exchange.
The escape route from the Paradox of Trust is to recognize that mutually beneficial cooperation between two individuals is not the same thing as the coincidence of two acts of kindness. When A chooses send in the Trust Game, his intention is not to be kind to B: it is to play his part in a mutually beneficial scheme of cooperation, defined by the joint action (send, return). … If A is completely confident that B will reciprocate, and if that confidence is in fact justified, A’s choice of send is in his own interests, while B’s choice of return is not in hers. Nevertheless, both players can understand their interaction as a mutually beneficial cooperative scheme in which each is playing his or her part.
This interpretation has implications for how we should view market exchange.
Theorists of social preferences sometimes comment on the fact that behaviour in market environments, unlike behaviour in Trust and Public Good Games, does not seem to reveal the preferences for equality, fairness and reciprocity that their models are designed to represent. The explanation usually offered is that people have social preferences in all economic interactions, but the rules of the market are such that individuals with such preferences have no way of bringing about the fair outcomes that they really desire.
…
Could it be that behaviour in markets expresses the same intentions for reciprocity as are expressed in Trust and Public Good Games, but that these intentions are misrepresented in theories of social preference?