Irrational Strategies: Dealing with an Altruistic Prisoner

Suppose you are presented with the following prisoner’s dilemma game. What is your choice? Most economics freshmen will have learnt that, when presented with the choices of cooperating or finking, finking is the dominant strategy, and an all-fink nightmare is the only pure-strategy Nash equilibrium. Against homo economicus, the cold and rational decision maker, your best bet would certainly be finking. But against the average Joe, can you assume rationality? And does a decision to cooperate necessarily imply irrationality on their part?
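Since the payoff matrix itself is not reproduced here, a minimal sketch using the textbook payoffs (temptation 5 > mutual cooperation 3 > mutual finking 1 > sucker’s payoff 0, an assumption rather than the article’s actual numbers) illustrates why finking dominates:

```python
# Hypothetical payoff matrix for a standard one-shot prisoner's dilemma.
# payoffs[(a, b)] = (payoff to player 1, payoff to player 2)
C, F = "cooperate", "fink"
payoffs = {
    (C, C): (3, 3),
    (C, F): (0, 5),
    (F, C): (5, 0),
    (F, F): (1, 1),
}

def best_response(opponent_move):
    """Player 1's best reply to a fixed move by player 2."""
    return max([C, F], key=lambda my: payoffs[(my, opponent_move)][0])

# Finking is the best reply whatever the opponent does: a dominant strategy.
print(best_response(C))  # fink
print(best_response(F))  # fink

def is_nash(a, b):
    """(a, b) is a pure-strategy Nash equilibrium if neither player
    gains by deviating unilaterally."""
    p1, p2 = payoffs[(a, b)]
    deviate1 = payoffs[(F if a == C else C, b)][0]
    deviate2 = payoffs[(a, F if b == C else C)][1]
    return p1 >= deviate1 and p2 >= deviate2

# Only (fink, fink) survives: the all-fink equilibrium.
print([(a, b) for a in (C, F) for b in (C, F) if is_nash(a, b)])
```

Any payoffs with the same ordering give the same conclusion; the specific numbers above are illustrative only.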
One of the most basic explanations for ‘irrational’ cooperation in prisoner’s dilemma games is altruism. Where an individual is utilitarian and cares about social benefit beyond their own explicit payoffs, the player may behave differently. Consider the same game below, with the difference that player one is now wholly altruistic, receiving utility equal to the sum of the explicit payoffs in the game above. In this scenario, the individual will be willing to cooperate, since doing so yields higher social welfare overall. To the extent that a player genuinely incorporates altruism into their utility, cooperation can be a rational choice.
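With the same hypothetical textbook payoffs as before (an assumption, since the article’s matrix is not reproduced), a wholly altruistic player one who values the sum of both players’ explicit payoffs finds that cooperation now dominates:

```python
C, F = "cooperate", "fink"
# Hypothetical explicit payoffs: (player 1, player 2)
payoffs = {(C, C): (3, 3), (C, F): (0, 5), (F, C): (5, 0), (F, F): (1, 1)}

def altruist_utility(a, b):
    """A wholly altruistic player 1 receives utility equal to the
    sum of both players' explicit payoffs."""
    p1, p2 = payoffs[(a, b)]
    return p1 + p2

# Whatever player 2 does, cooperating yields the altruist higher utility:
#   vs cooperate: 3 + 3 = 6  >  5 + 0 = 5
#   vs fink:      0 + 5 = 5  >  1 + 1 = 2
for b in (C, F):
    best = max([C, F], key=lambda a: altruist_utility(a, b))
    print(b, "->", best)  # cooperate in both cases
```

The transformation simply replaces player one’s explicit payoff with a social-welfare term; cooperation being dominant then follows from the arithmetic above.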
Returning to the self-centred player, Kreps et al.[1] examined the repeated prisoner’s dilemma. They considered a Bayesian game (i.e. one with asymmetric, imperfect information) in which players were unsure whether the other was rational. To build a reputation for altruism, players would operate under the guise of an irrational ‘tit-for-tat’ strategy, cooperating until finked on, then finking in return in the subsequent period. In this delicate setup, Kreps et al. argue that players would want to cooperate up until a critical point, at which the ‘altruism’ is revealed as a façade and they fink on one another for the remainder of the game. To this extent, irrationality itself is treated as a strategy: truly rational players adopt it as a means of maximising their payoffs over time.
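The full Kreps et al. model involves Bayesian updating over the opponent’s type, but the end-game logic can be sketched with a toy simulation (hypothetical textbook payoffs again; each player mimics tit-for-tat until a chosen ‘switch’ round, then finks for the rest of the game):

```python
C, F = "cooperate", "fink"
payoffs = {(C, C): (3, 3), (C, F): (0, 5), (F, C): (5, 0), (F, F): (1, 1)}

def play(rounds, switch1, switch2):
    """Both players echo the opponent's previous move (tit-for-tat,
    opening with cooperation) until their own switch round, after
    which they fink unconditionally. Returns total payoffs."""
    h1, h2 = [], []
    total1 = total2 = 0
    for t in range(rounds):
        m1 = F if t >= switch1 else (C if t == 0 else h2[-1])
        m2 = F if t >= switch2 else (C if t == 0 else h1[-1])
        p1, p2 = payoffs[(m1, m2)]
        total1 += p1
        total2 += p2
        h1.append(m1)
        h2.append(m2)
    return total1, total2

print(play(10, 10, 10))  # never defect: (30, 30)
print(play(10, 9, 10))   # fink in the last round first: (32, 27)
print(play(10, 8, 9))    # best reply to a round-9 defector: (30, 25)
```

Against a faithful cooperator, finking first in the final round pays (32 versus 30), and against an opponent expected to fink in round 9, switching a round earlier pays again, which is the unravelling pressure the repeated-game argument turns on.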
This meta-usage of irrationality may seem difficult to apply to our average Joe, but Andreoni and Miller[2] support the idea. In their experiments, they found that building a reputation for altruism improved the level of cooperation in repeated prisoner’s dilemma games. In line with Kreps et al., cooperation also tended to drop in the final repetitions, where the value of the other player’s continued assistance was outweighed by the generous payoff of finking first. Their distinction, however, was that some subset of the population behaved genuinely altruistically, cooperating far more than expected. These individuals are believed to receive extra utility from helping others, which suggests that their reputation building is not meant to deceive, but to improve the overall outcome.
The assumption of rational agents is frequently relied upon. Loosening it opens up a world of complexity in game-theoretic analysis, and it is this added depth that offers a much more interesting insight into the other factors we weigh when strategising. Homo economicus may be a paragon of rationality, but when presented with the prisoner’s dilemma, a pinch of irrationality might just be beneficial.
[1] Kreps, David, Paul Milgrom, John Roberts and Robert Wilson. “Rational Cooperation in the Finitely Repeated Prisoners’ Dilemma.” Journal of Economic Theory 27, no. 2 (1982): 245-252.
[2] Andreoni, James and John Miller. “Rational Cooperation in the Finitely Repeated Prisoner’s Dilemma: Experimental Evidence.” The Economic Journal 103, no. 418 (1993): 570-585.