Ethics and the Prisoner's Dilemma: Using Game Theory to Understand Morality

TLDR: Ethical social norms solve the prisoner's dilemma and other sub-optimal equilibria. This is why evolution has made humans moralistic and why ethics are very important to a well-functioning society. Don't confuse apathy with equanimity. Ultimately, it is not rational to be apathetic.

I've been meaning to write about my thoughts on ethics for some time now, but this past election has compelled me to sit down and finally do so. Many "rational thinkers" that I have been following seem to think that ethics are useless and/or meaningless, and that apathy is a more rational approach. You get the impression that apathy is a badge of honour among rational thinkers. Nothing could be further from the truth. It's true that equanimity is an important input to rational thinking, but be careful not to confuse apathy with equanimity. A true rational thinker should recognize that ethics are extremely important to rational thinking. I will try to explain how and why.

First, let's talk about the prisoner's dilemma, the classic game from game theory. I'm not going to explain the game in detail—that's what Wikipedia is for—but the situation can be described by the following decision table:

                        B cooperates               B defects
  A cooperates          both win                   A loses big, B wins big
  A defects             A wins big, B loses big    both lose

The Nash equilibrium—what I call the "stable outcome"—of the prisoner's dilemma is that both players lose, even though it is entirely possible for both to win had they strategically cooperated. (Econ wonks would say that the outcome isn't Pareto efficient.) This is important because it demonstrates a situation in which two individuals who each behave selfishly end up worse off than if they had both behaved selflessly. This is not what free market ideology says is supposed to happen.
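To make the "stable outcome" concrete, here is a small sketch that checks every outcome of the game for the Nash property (no player can gain by unilaterally switching) and for Pareto efficiency. The payoff numbers are hypothetical placeholders, chosen only to satisfy the dilemma's structure (temptation > reward > punishment > sucker's payoff):

```python
from itertools import product

# Hypothetical payoffs: payoffs[(a, b)] = (payoff to A, payoff to B)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
moves = ["cooperate", "defect"]

def is_nash(a, b):
    """Neither player can gain by unilaterally changing their move."""
    pa, pb = payoffs[(a, b)]
    best_a = all(payoffs[(a2, b)][0] <= pa for a2 in moves)
    best_b = all(payoffs[(a, b2)][1] <= pb for b2 in moves)
    return best_a and best_b

def is_pareto_efficient(a, b):
    """No other outcome makes one player better off without hurting the other."""
    pa, pb = payoffs[(a, b)]
    return not any(
        qa >= pa and qb >= pb and (qa, qb) != (pa, pb)
        for qa, qb in payoffs.values()
    )

for a, b in product(moves, moves):
    if is_nash(a, b):
        print(a, b, "-> Nash equilibrium;",
              "Pareto efficient" if is_pareto_efficient(a, b) else "not Pareto efficient")
```

Running this confirms the point of the table: mutual defection is the only Nash equilibrium, and it is not Pareto efficient, while mutual cooperation is Pareto efficient but not stable.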

Humans have evolved to follow a heuristic of selfishness—and for good reason: in most cases, behaving selfishly not only makes the individual better off, but everyone else too. Indeed, this is the standard argument in favour of free markets, and it is absolutely correct. But what the prisoner's dilemma is showing is that the selfish heuristic has a bias in some situations which we need to watch out for. Which situations? And how can we fix it?

The solution to the prisoner's dilemma is very simple: all we have to do is convince both players to behave, in a sense, selflessly. And, thankfully, this is exactly what evolution has provided for us through ethics and social norms.

Humans have evolved to adopt social norms—which I like to think of as evolutionary tools for social engineering—and it is these social norms that allow humans to overcome coordination failures. What happens, exactly, to the prisoner's dilemma when "not being selfish" becomes a social norm? The individuals are compelled not to behave selfishly, essentially removing selfishness as a strategic option and pulling them towards the more optimal outcome where they both stand to win.
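One way to sketch this is to model the norm as a cost (guilt, reputation, punishment) attached to defecting. All the numbers here, including the norm cost, are hypothetical; the point is only that once the cost is large enough, the game's stable outcome shifts from mutual defection to mutual cooperation:

```python
NORM_COST = 3  # hypothetical cost a social norm attaches to selfish behaviour

# Same hypothetical payoffs as before: base[(a, b)] = (payoff to A, payoff to B)
base = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
moves = ["cooperate", "defect"]

def with_norm(payoffs, cost):
    """Subtract the norm's cost from each defector's payoff."""
    return {
        (a, b): (pa - cost * (a == "defect"), pb - cost * (b == "defect"))
        for (a, b), (pa, pb) in payoffs.items()
    }

def nash_equilibria(payoffs):
    """All outcomes where neither player gains by unilaterally switching."""
    return [
        (a, b) for a in moves for b in moves
        if all(payoffs[(a2, b)][0] <= payoffs[(a, b)][0] for a2 in moves)
        and all(payoffs[(a, b2)][1] <= payoffs[(a, b)][1] for b2 in moves)
    ]

print(nash_equilibria(base))                        # defection is stable
print(nash_equilibria(with_norm(base, NORM_COST)))  # cooperation becomes stable
```

In other words, the norm doesn't change what each player wants; it changes the game itself, so that the selfless outcome is the one no one wants to deviate from.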

Indeed, this is what I think it means to be ethical: a behaviour is ethical if, once adopted by everyone, it solves a situation resembling the prisoner's dilemma. And to get everyone to adopt an ethical behaviour, it must become a social norm. And since there is a huge advantage to adopting ethical social norms, they have been evolutionarily rewarded. I usually refer to ethical social norms as morals.

Now of course, two individuals might be able to overcome the prisoner's dilemma on their own, without the need for an ethical social norm, if each person recognized their impact on the other, reasonably expected the other to do the same, and were thus able to cooperate in some unspoken way (through the use of a Schelling point, for example). And sometimes, I'm sure, such cooperation occurs. At scale, however, when enough individuals are involved, behaving selfishly will always seem harmless to others, and not even Schelling points will save you. This is why ethics are so important: without ethical social norms, a prisoner's dilemma involving millions of people becomes nearly impossible to overcome.

Another insightful example is the ultimatum game. In that game (again, see Wikipedia), the "rational" expectation is that the second player will accept any non-zero offer from the first player. But that doesn't happen in real experiments with real people. Instead, the second player rejects offers that they deem unfair. But what is "unfair"? Well, that is driven by the individual's a priori ethics. When it comes to dividing the pie, it seems that we expect people to behave ethically. As we should! Because that is, in the long run, the rational thing to do.
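A minimal sketch of the ultimatum game, where a hypothetical fairness threshold stands in for the responder's ethics. Against such a responder, the proposer's best move is no longer the minimum non-zero offer:

```python
PIE = 10                  # hypothetical pie of 10 units to divide
FAIRNESS_THRESHOLD = 0.3  # hypothetical: reject offers below 30% of the pie

def responder_accepts(offer, threshold=FAIRNESS_THRESHOLD):
    """An 'ethical' responder rejects offers it deems unfair."""
    return offer >= threshold * PIE

def proposer_payoff(offer):
    """The proposer keeps the rest if accepted, and gets nothing if rejected."""
    return PIE - offer if responder_accepts(offer) else 0

# The "rational" prediction says offer 1 unit; against an ethical responder,
# the proposer does best by offering enough to be seen as fair.
best_offer = max(range(PIE + 1), key=proposer_payoff)
print(best_offer, proposer_payoff(best_offer))  # -> 3 7
```

With these numbers, offering 1 or 2 units yields nothing (the offer is rejected), while offering 3 yields 7; the responder's ethics discipline the proposer into fairness.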

Ethics are closely related to what evolutionary theorists call proximate cause and ultimate cause. Proximate cause is "I eat because I am hungry", while ultimate cause is "If I didn’t get hungry, I wouldn't eat, I wouldn't survive, and neither would my gene pool". If we invoke ultimate cause, plenty of behaviour that seems irrational (locally) is actually quite rational (ultimately). Among other things, this explains morality, altruism, and ethics, and is why these traits have been so evolutionarily successful.

By now I hope you can see that it's actually quite easy to think of prisoner-dilemma-like situations: all we have to do is imagine scenarios where everyone is better off when everyone behaves ethically. Here are some:

  • Example: Why would someone take a small risk to prevent a stranger from getting hit by a bus? Because the tendency to care about the well-being of others, in the long run, puts us all where we are today.
  • Example: Why do people vote when their votes are unlikely to change the outcome? Because the tendency to vote makes our collective decisions more representative.
  • Example: Why didn't my neighbour rob my house when he knew I was on vacation? Why didn't that passerby steal an apple from the fruit stand instead of paying for one? Because the tendency to respect property rights allows our trade-based society to function.
  • Example: Why don't people kill other people they disagree with? This one's obvious, right?

The irrational thing to think is that a person behaving ethically in any of the above examples is "irrational". Get your head out of the sand.

So how should we think about ethics and elections? Well, this is how I think about it:

                          B votes selfishly          B votes ethically
  A votes selfishly       both lose                  A wins big, B loses big
  A votes ethically       A loses big, B wins big    both win

Here we have a prisoner's dilemma (voter's dilemma?) that is inherent to any democratic election. It is simply not in Voter A's interest (locally) to vote out of concern for Voter B, and vice versa. But ultimately, it is in each of their interests: both would be much better off if both voted with concern for the other. And since voting happens at scale, this voter's dilemma is nearly impossible to overcome without some sort of ethical social norms. And that is why, you see, it is unethical to be apathetic. Not only that, if you're not behaving ethically, then you're free-riding off everyone who is.

I have often said that understanding society can be reduced to understanding the "initial value problem" of morality. In other words, morality plays an important part in determining where society is headed. In a sense, morality is the ultimate engineering problem. Given the morality we've seen in this past election, what can we say about where US society is headed? Are you apathetic and, therefore, behaving unethically?


“I don't want to sell credit to people who are going to hurt themselves with it. You should only sell products that are good for the people who use them. Some disagree with this, but I know I'm right. That is to say, you're talking to a Republican who admires Elizabeth Warren.” — Charlie Munger