# Be Ye Friend Or Be Ye Foe? Part 3

The Story So Far…
Two players are presented with an opportunity.  Each may remain loyal to the other player or betray her. Neither player will interact with the other in any way ever again. There is no out-of-game way to be rewarded or punished.

What could happen?

(1) If both remain loyal, each of them gets the “Cooperation” reward.

(2) If both betray the other, each of them gets the “Betrayal” reward.

(3) If one betrays the other while the other remains loyal, the betrayer gets the “Traitor” reward while the loyal one gets the “Sucker” reward.

This is the prisoner’s dilemma.

In a strict prisoner’s dilemma, the rewards are staggered with the “Traitor” reward best, followed by the “Cooperation” reward, then “Betrayal,” and finally “Sucker” the worst.  In mathematical terms, T > C > B > S.

Further, cooperation is much more likely if this decision point occurs repeatedly but the players do not know exactly how many times it will occur.

Correctly Staggering Rewards


We discussed the Friend or Foe game show in detail in our last column.  Notice that this game deviated from this structure a bit–the “Betrayal” reward and the “Sucker” reward were identical: going home with no money.  Returning to mathematical terms, T > C > B = S.  If you opt to have two identical outcomes, this is the place to do it–at the bottom.  Notice also that in their scheme, T + S = C + C = total prize money.  Considering the demands of a game show–the need for clear rules that are easily parsed by the audience viewing at home–I can certainly see why they made this decision.

Personally, I would go a bit further than the standard T > C > B > S. I would also aim for 2C > T + S > 2B.  This was the reward structure of the peace war game and it suits my general design style.  Let’s go back to our Friend or Foe game show for an illustration.
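These two orderings are easy to check mechanically. Here is a minimal sketch in Python (the function names and the example payoff values are my own, chosen to match the Friend or Foe numbers discussed below):

```python
def is_strict_dilemma(T, C, B, S):
    """Check the standard prisoner's dilemma ordering: T > C > B > S."""
    return T > C > B > S

def rewards_cooperation(T, C, B, S):
    """Check the stronger peace war ordering: 2C > T + S > 2B."""
    return 2 * C > T + S > 2 * B

# Friend or Foe's standard payouts for a $1000 pot:
print(is_strict_dilemma(T=1000, C=500, B=0, S=0))    # False: B equals S, so not strict
print(rewards_cooperation(T=1000, C=500, B=0, S=0))  # False: T + S equals 2C exactly

# With a 10% "both friends" bonus (C becomes $550):
print(rewards_cooperation(T=1000, C=550, B=0, S=0))  # True: 1100 > 1000 > 0
```

A designer can run any proposed reward table through checks like these before playtesting it.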

Two players are going into the final showdown.  They have amassed \$1000 in prize money.  Here are the possible outcomes under the standard game rules:

A) If both vote friend, they each get \$500.

B) If both vote foe, they each get \$0.

C) If one votes friend and the other votes foe, the foe gets \$1000 and the friend gets \$0.

Now imagine that we make a small tweak to the rules.  If both players choose “Friend,” we’ll throw in an extra 10%.  Now the decisions are:

A) If both choose friend, they each get \$550.

B) If both choose foe, they each get \$0.

C) If one chooses friend and the other chooses foe, the foe gets \$1000 and the friend gets \$0.

This is a small change but it has broad implications for the players.  If the players consistently choose friend, they end up collectively further ahead on repeated plays–\$550 + \$550 = \$1100–than in any other case–\$1000 + \$0 = \$1000 in case (C) and \$0 + \$0 = \$0 in case (B).  Of course, this game show doesn’t have repeated plays.  This decision is the last one of the game. But these players are conscientious, not amoral.  And that makes it all the more challenging for our players.
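The collective totals under the tweaked rules can be tabulated in a few lines (the dictionary layout is my own; the payouts are the ones from the rules above):

```python
# Payoffs under the tweaked rules: a 10% bonus if both choose "friend".
# Keys are (player_1_choice, player_2_choice); values are (p1_payout, p2_payout).
PAYOFFS = {
    ("friend", "friend"): (550, 550),
    ("friend", "foe"):    (0, 1000),
    ("foe",    "friend"): (1000, 0),
    ("foe",    "foe"):    (0, 0),
}

for choices, (p1, p2) in PAYOFFS.items():
    print(f"{choices}: collective total = ${p1 + p2}")
```

Mutual friendship is the only outcome whose collective total exceeds the original \$1000 pot.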

To Be Continued…

Our next column addresses ways to put Friend or Foe mechanisms into our own designs. See you Tuesday!

Have you played a game with a Friend or Foe mechanism? What did you think of it? Have you written one? How did your players respond to it?  Share with your fellow readers in the comments below.  And if you’re enjoying what you’re reading, create an account with WordPress and follow this blog.  You keep reading. I’ll keep writing.

## 4 thoughts on “Be Ye Friend Or Be Ye Foe? Part 3”

1. I recently had a discussion with a friend about adding this sort of rewards system to his game, so I’m glad to see it’s getting treatment in this series so I can point him to it!

Re: games that use a friend/foe mechanic, I think Cosmic Encounter is a good example. During a battle you can both choose to play battle cards, which will result in ships lost for one side; a negotiation, whereby players will negotiate some sort of terms instead of fighting; or a betrayal, where one person plays a negotiate and one plays a battle card. In the final case the person playing a negotiate loses the battle, but gets a card from the winner as recompense. I think the system works well, especially since Cosmic Encounter breeds so much distrust among players anyway. And there’s a hidden consequence of the mechanic as well: the fact that people are going to remember who did them dirty for future games!

2. I strongly recommend Robert Axelrod’s book “The Evolution of Cooperation.” Axelrod organized a competition to find the best strategy for repeated play of the Prisoner’s Dilemma game.

Participants submitted a program that contained a prewritten strategy on how to play the game that competed with other people’s programs. The winner was a strategy known as tit for tat. If you rat me out this round, then I will rat you out next round; however, if you do not rat me out this round, then I will not rat you out next round.

You already allude to Dixit’s work on the topic, but I thought it was important to highlight why repeated play can lead to cooperation. The design of the game empowers players to punish other players for not cooperating by refusing to cooperate in the future.

An extreme case is what I will call the hair trigger strategy. I will not rat you out until you rat me out, after which I will rat you out for the rest of the game. The player facing another player using the hair trigger strategy will weigh the long-term return of cooperation against the short-term benefit of ratting someone out. Depending on the player’s discount rate (and whether the end of the game is known), this can result in long-term cooperation.
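The tit for tat and hair trigger strategies described in these comments can be sketched as a small iterated-game simulation (the function names, payoff numbers, and round count are my own illustration; the payoffs satisfy both T > C > B > S and 2C > T + S > 2B):

```python
def tit_for_tat(my_history, their_history):
    """Cooperate first; thereafter copy the opponent's last move."""
    return their_history[-1] if their_history else "cooperate"

def hair_trigger(my_history, their_history):
    """Cooperate until betrayed once, then defect for the rest of the game."""
    return "defect" if "defect" in their_history else "cooperate"

def play(strategy_a, strategy_b, rounds, payoffs):
    """Run an iterated prisoner's dilemma and return each player's total score."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pa, pb = payoffs[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Illustrative payoffs: T=5, C=3, B=1, S=0.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

print(play(tit_for_tat, hair_trigger, 10, PAYOFFS))  # (30, 30): stable cooperation
```

Since neither strategy defects first, the two cooperate for all ten rounds; swap in an always-defect opponent and the hair trigger punishes it from round two onward.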