Nash equilibrium vs subgame perfect equilibrium


This post is especially for newcomers to game theory. When I started studying these topics, I remember struggling to properly understand the differences and to build an intuition for what is happening, and I saw my friends struggle as well. Having now studied the material several times over the years, I feel I have developed some understanding that might be useful to you.

There are plenty of resources where you can find the technicalities, so I will refrain from all that here. Instead, I will jump straight into an example and use it to discuss the concepts. Let's start!

Suppose there are two players, me (player 1) and you (player 2). We play sequentially, with me moving first, followed by you. I have two choices: up (U) or down (D). You have two choices: right (R) or left (L). The game is described in tree form below, with the respective payoffs written at the leaves. (I assume you know the minor technicalities such as what payoffs, nodes and leaves are; if not, I would suggest googling them first.)




Before progressing, it is good to clarify the first basic point of doubt: what the players' strategies are. Again setting technicalities aside, in simple words a strategy for me here is playing U or D, while for you it is playing R or L. The sequential structure simply says that I move first and then you move. I am free to choose whatever I want (U or D), and so are you (L or R). After I make my move, you are free to decide. It should be understood that even if I choose U, you are still free to choose L or R. Those branches are not drawn in the tree because they yield the same payoff, namely (0, 2). But that does not mean that if I play U you cannot make your move. You surely can. The tree can be thought of as the one given below.




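Since the payoffs do all the work in what follows, here is a minimal sketch in Python of the payoff table as I will use it in the discussion below (the dictionary name and layout are just for illustration; each tuple is (my payoff, your payoff)):

```python
# Payoffs (player 1, player 2) for each pair of moves in this example.
# After U the outcome is the same whatever you do, which is why those
# branches were not drawn in the original tree.
payoffs = {
    ("U", "L"): (0, 2),
    ("U", "R"): (0, 2),
    ("D", "L"): (-1, -1),
    ("D", "R"): (1, 1),
}

# Even if I play U, you still get to move -- it just does not change the outcome.
assert payoffs[("U", "L")] == payoffs[("U", "R")] == (0, 2)
```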
Now, coming to Nash equilibrium. It simply says that your response should be optimal given what I have done, and vice versa: choosing another option should not make you strictly better off. So suppose I go D. Now it is your turn; you can go L or R. Suppose you decide to go L. For this to be a Nash equilibrium, as we discussed, there should be no profitable deviation given what the other player is doing. Let's check. Given that I have played D, had you chosen R instead of L you would have been strictly better off (1 > -1). Thus your chosen response (L) is not optimal, and (D, L) is not a Nash equilibrium. What if you had chosen R instead of L? By a similar argument, given that I play D, there is no profitable deviation for you. But for a Nash equilibrium the strategies must be optimal for both players, that is, both you and me. So, given that you have played R, was it optimal for me to have played D? If I deviate and choose U (given that you play R), I am not better off (0 < 1). Remember, as we discussed, even when I play U you are free to play R or L. Therefore no profitable deviation exists for either of us, and so (D, R) is a Nash equilibrium.

There might be other Nash equilibria. We should check all possible combinations, that is, (U, L), (U, R), (D, L) and (D, R). We have already checked the last two of the four, so let's check the first two. Try it yourself before reading on.

So, if I play U and you play L, does any profitable deviation exist? Given that I play U, you are indifferent between L and R (2 = 2), so deviating certainly does not make you better off. And if you play L and I deviate by playing D, am I better off? No, since I would move from 0 to -1 (0 > -1). So no profitable deviation exists, and (U, L) is another Nash equilibrium. How about (U, R)? Using the same procedure and arguments: if I play U, you are indifferent between R and L. But given that you play R, if I switch to D I am strictly better off (1 > 0). So a profitable deviation exists, and therefore (U, R) is not a Nash equilibrium.

Thus, we find that the given game has two Nash equilibria, namely (U, L) and (D, R).
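If you would like to verify this enumeration mechanically, here is a small sketch (a hypothetical helper, not part of the original analysis) that checks every strategy profile for a profitable unilateral deviation:

```python
# Payoff table (player 1, player 2) for the example in this post.
payoffs = {
    ("U", "L"): (0, 2),
    ("U", "R"): (0, 2),
    ("D", "L"): (-1, -1),
    ("D", "R"): (1, 1),
}

def is_nash(p1, p2):
    """A profile is Nash if neither player gains by deviating unilaterally."""
    mine, yours = payoffs[(p1, p2)]
    # My deviation: switch between U and D while you keep your move.
    if any(payoffs[(alt, p2)][0] > mine for alt in ("U", "D")):
        return False
    # Your deviation: switch between L and R while I keep my move.
    if any(payoffs[(p1, alt)][1] > yours for alt in ("L", "R")):
        return False
    return True

for p1 in ("U", "D"):
    for p2 in ("L", "R"):
        print((p1, p2), is_nash(p1, p2))
# Prints True only for (U, L) and (D, R), matching the discussion above.
```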

Now, what about subgame perfection? It basically means that the response must be optimal in every subgame. What I mean is this: take our Nash equilibria, (U, L) and (D, R), and look at your subgame (player 2's). Note that there are two subgames in the original tree; we should not count subgames from the extended tree we drew, because technically the game ends when I play U, and the added branches are only for intuition. Having said that, a subgame perfect equilibrium must prescribe the optimal choice in every subgame. In player 2's subgame (after D), L is not optimal, since you could choose R and be better off (1 > -1). Note that at this step we do not ask whether I play U or D; rather, we start from the bottom-most subgame and work upwards (backward induction). Moving to the game one level above, which in our case is the whole game: given that your (player 2's) optimal action is to play R, the tree for me (player 1) would look like the one given below (for intuition).






It can clearly be seen that I would rather play D than U, since D gives me the higher payoff (1 > 0). Thus only (D, R) is an SPNE, and (U, L) is not. A Nash equilibrium that is not subgame perfect, like (U, L) here, is sometimes said to rest on a non-credible (or "incredible") threat.
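The same backward-induction argument can be written as a short sketch (again using the payoff table of this example; the variable names are just illustrative):

```python
# Payoff table (player 1, player 2) for the example in this post.
payoffs = {
    ("U", "L"): (0, 2),
    ("U", "R"): (0, 2),
    ("D", "L"): (-1, -1),
    ("D", "R"): (1, 1),
}

# Step 1 (bottom subgame): after D, player 2 picks the reply that maximises
# player 2's own payoff, regardless of what that does to player 1.
reply_after_D = max(("L", "R"), key=lambda p2: payoffs[("D", p2)][1])  # "R"

# Step 2 (top of the tree): player 1 compares playing U with playing D
# followed by player 2's best reply.
payoff_U = payoffs[("U", "L")][0]            # 0 (your reply after U is immaterial)
payoff_D = payoffs[("D", reply_after_D)][0]  # 1
p1_choice = "D" if payoff_D > payoff_U else "U"

print("SPNE:", (p1_choice, reply_after_D))   # ('D', 'R')
```

This reproduces the conclusion above: only (D, R) survives subgame perfection.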

This is how Nash equilibrium and subgame perfect equilibrium differ. The main point to note and always remember is that equilibrium means an optimal response given the rival's strategy; subgame perfection additionally demands this optimality in every subgame.
