In this year’s Slate Star Codex annual survey, Scott Alexander tabled a prisoner’s dilemma exercise with a difference: there was real money at stake. A handful of the several thousand people who took the survey would be randomly selected to ‘play’ the games, using the answers they had submitted, with up to $1000 to be won.
Scott, in fact, set three different, closely related prisoner’s dilemma type exercises, presumably to see whether results differed. They were:
Game I: Growing The Commons
Choose either “cooperate” or “defect”. I will randomly select one player to win the prize, but defectors will be twice as likely to win as cooperators. The prize in dollars will be equal to the percent of people who chose cooperate, times 10 (ie if 50% of people cooperate, the prize will be $500).
Game II: Prisoner’s Dilemma
Choose either “cooperate” or “defect”. I will randomly select two people to play the game. If they both cooperate, they will both get $500. If one person cooperates and the other defects, the defector will get $1000. If they both defect, they will both get $100.
Game III: Prisoner’s Dilemma Against Your Clone
Choose either “cooperate” or “defect”. I will randomly select one person to play the game, and nonrandomly select their competitor by choosing the single other person who is most similar to them, ie the person whose survey answers are closest to their own (not including their answers to Part 23). If they both cooperate, they will both get $500. If one person cooperates and the other defects, the defector will get $1000. If they both defect, they will both get $100.
The third game taps into Douglas Hofstadter’s concept of ‘superrationality’: the argument that, for certain types of player (for example, the sort of sentient species that manages to avoid annihilating itself via nuclear mutually assured destruction), the ‘superrational’ choice is the one that maximises your own outcome on the assumption that the other player is also ‘superrational’ and will therefore make the same choice as you.
When I took the survey, I thought about what to answer for a while. Obviously, as in any non-iterated prisoner’s dilemma, the strictly self-maximising answer is to defect: whatever the other player does, you come off better by defecting, and you have no control over what the other player does (this is self-evidently true in an anonymous internet survey, and applies equally to Game III: you are not really playing against ‘your clone’, and what you choose has no impact on what they choose). Is this morally correct, though? In other words, despite the contrived and entirely artificial nature of the game, is cooperating in a prisoner’s dilemma an absolute rule, a form of the categorical imperative? Or should this rightly be looked at as a game, where not defecting would be as foolish as refusing to use the robber in Settlers of Catan on the grounds that ‘stealing is wrong’?
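To spell the dominance argument out for Game II, here is a minimal check in Python. Note one assumption: the rules don’t state what a cooperator gets when paired with a defector, so $0 is taken below, which is the standard prisoner’s dilemma reading.

```python
# Game II's payoffs, as a quick check that defection dominates. The rules don't
# say what a cooperator receives when the other player defects; $0 is assumed
# here, which is the standard prisoner's dilemma reading.
PAYOFF = {
    ("cooperate", "cooperate"): 500,
    ("cooperate", "defect"):      0,   # assumed, not stated in the rules
    ("defect",    "cooperate"): 1000,
    ("defect",    "defect"):     100,
}

for their_choice in ("cooperate", "defect"):
    if_cooperate = PAYOFF[("cooperate", their_choice)]
    if_defect = PAYOFF[("defect", their_choice)]
    print(f"If the other player {their_choice}s: "
          f"cooperating pays ${if_cooperate}, defecting pays ${if_defect}")
```

Whichever column the other player picks, the defecting row pays more, which is all the dominance argument needs.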
After some consideration I decided that it was a game, and therefore defected in all three cases. Both I and every other player were entering into this purely of our own free will (the questions were the last in the survey and could be omitted, so there was not even any tacit coercion or incentive), and no player had any entitlement whatsoever to the money – it was purely free potential cash in a minimal-effort game. No negative consequence, beyond getting less free money, would befall any player as a result of a defection. Finally, as regards Scott himself: firstly, he had chosen to put up this prize entirely of his own volition as a research project, and secondly, defection would tend to reduce the amount of money he gave away, so his interests gave no added weight to cooperation. Despite the instinct to cooperate, the decision was not a moral one, so defection was permissible and correct.
Interestingly, the things I would have seen as clearly immoral would have been attempts to subvert the game (and therefore the research project) to increase one’s chance of winning, such as making multiple entries under different IP addresses. Indeed, so far outside the ‘moral choice space’ are such things that I doubt I’d even have thought of them had I not been considering the decision in moral terms. Needless to say, I did no such thing.
Of course, there is one further factor, arising from the fact that this was not a classic dilemma. In reality, the chance of actually playing the game was less than one in a thousand, meaning the expectation value of any choice was relatively low. If one felt one should cooperate, and the certain warm fuzziness that would accrue from doing so outweighed an expectation value of about $1, then the self-maximising thing to do was to cooperate. This is entirely rational, in the same way that it is rational to play the lottery if the enjoyment gained from daydreaming of being a millionaire outweighs the loss of £1 a week – feelings matter, and should be taken into account in such decisions.
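For a sense of scale, here is a rough back-of-the-envelope sketch of those expectation values. The respondent count and the assumed cooperation rate are illustrative guesses only; different plausible numbers give figures anywhere from a few tens of cents to around a dollar.

```python
# A rough order-of-magnitude sketch of the expectation values in Games II/III.
# The respondent count and the other player's cooperation probability are
# illustrative guesses, not survey figures.
n_respondents = 7000             # "several thousand" people took the survey
p_selected = 2 / n_respondents   # two people are drawn to play

p_other_cooperates = 0.7         # assumed for illustration

# Expected payout for each choice, using the Game II payoffs above
# (cooperator assumed to get $0 against a defector).
ev_cooperate = p_selected * (p_other_cooperates * 500 + (1 - p_other_cooperates) * 0)
ev_defect = p_selected * (p_other_cooperates * 1000 + (1 - p_other_cooperates) * 100)

print(f"EV(cooperate) ~ ${ev_cooperate:.2f}")
print(f"EV(defect)    ~ ${ev_defect:.2f}")
```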
What did other people do?
These were the survey results I was most looking forward to, and they rather surprised me. It is worth noting that most of those taking an SSC survey are likely to be familiar with the prisoner’s dilemma, and a goodly number, if not the majority, will have come across the concept of superrationality.
In the event, 69.9% of people cooperated in Game I, 71.8% in Game II and 84.8% in Game III.
The results for Games I and II were very similar, with more people cooperating in Game III. That in itself didn’t surprise me – I expected more people to cooperate in Game III, as superrationality is an appealing concept, even though I consider that, at least in this situation, it is a mistake to treat Game III any differently from Game II, meaning one should logically either cooperate in both or defect in both (I know at least two people who feel differently on this, though, so I welcome debate!).
What did surprise me was the high proportion of people who cooperated. I had expected, perhaps because of my stereotype of the sort of people who read the website (the readership skews towards those with a physical sciences/tech background; furthermore, Scott considers himself part of the ‘rationalist’ community and many people from that community read it, though equally there are plenty of readers, such as myself, who are not), that most people – or at least half – would take the ‘rational’ self-maximising approach in a game situation such as this. It turns out that wasn’t the case, whether out of a sense of moral obligation, adherence to superrationality or self-maximising warm fuzzies. The rate of cooperation seems very similar to that found in experiments on the general public (see e.g. this paper published last year by the Royal Society), which is quite interesting, given how much this readership differs from the general population in other areas, including ones that one might expect to matter in this circumstance.
Looks like there’s a missing link for the royal society paper?
Thanks – fixed.
Did you ever have the pleasure of watching Jasper Carrott’s “Golden Balls”?
The final round was a Prisoner’s Dilemma – deciding to split the jackpot equally or steal it (if both steal there was no jackpot) while persuading your co-competitor that you will split the jackpot.
The penalty for stealing was that, since it was on television, you would publicly reveal yourself to be a shameless liar, at least in this particular situation.
Some interesting findings:
https://pubsonline.informs.org/doi/abs/10.1287/mnsc.1110.1413
And some very interesting tactics:
Very interesting – thank you!