# Live and Let Live

July 26, 2017 5:42 AM

This is a playable explanation of how we trust, why we don't, and who wins, through the lens of the Iterated Prisoner's Dilemma [wiki]. Made by Nicky Case; n.b. music will play when you press Start.
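
For anyone who wants to poke at the mechanics outside the browser, here is a minimal sketch of the iterated game, assuming the linked game's payoffs (cooperating costs you 1 coin and gives your partner 3; these numbers are taken from the game's own description, but treat them as an assumption):

```python
# A minimal iterated prisoner's dilemma with the assumed payoffs:
# cooperating ("C") costs you 1 coin and gives your partner 3 coins.

def play_round(move_a, move_b):
    """Return the payoff each player earns for one round."""
    a = (-1 if move_a == "C" else 0) + (3 if move_b == "C" else 0)
    b = (-1 if move_b == "C" else 0) + (3 if move_a == "C" else 0)
    return a, b

def always_cheat(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else "C"

def match(strat_a, strat_b, rounds=10):
    """Play `rounds` rounds and return the two total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        ma = strat_a(hist_a, hist_b)
        mb = strat_b(hist_b, hist_a)
        pa, pb = play_round(ma, mb)
        score_a += pa
        score_b += pb
        hist_a.append(ma)
        hist_b.append(mb)
    return score_a, score_b

print(match(tit_for_tat, always_cheat))  # (-1, 3): copycat loses one coin, once
```

Tit-for-tat loses only the first round to an always-cheat, then refuses to be exploited again, which is the core of why it does so well in the game's tournament.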

Bit odd that in one step it says "sometimes a few grudgers stick around" because for me they take over.

I'm also suspicious of how it doesn't randomise amongst equal scores and maintains the list order in killing off poor performers.

posted by edd at 6:31 AM on July 26

So I click Cooperate as my first real move and the game just hangs while continuing to play soothing on-hold music at me.

This explains much about humanity's relationship to technology.

posted by flabdablet at 6:38 AM on July 26 [3 favorites]

This is really cool! It brings to mind many variations (which may have already been explored, I dunno):

First, what about a style of iterated game in which players have some degree of control over who they interact with? That could be done in any number of ways. Currently everybody interacts with everybody in every round. But you could imagine alternating between two sorts of decisions: "in-game" decisions that control whether you cooperate or defect with the person in front of you, and "matchup" decisions that affect the probability distribution of what other players you interact with. That still leaves a few parameters for the simulation to fix: How quickly can you change your matchup distribution? Can you see other people's games (or, how well can you see other people's games)?

Second, what happens when the payoff matrix is subject to noise per-game, or when different agents have different views of the payoff matrix? What new strategies might make sense under noisy or uncertain payoff matrices? Maybe you think we're playing Chicken and I think we're playing Volunteer's Dilemma. (And if we really want to get weird, maybe *I know* you think we're playing Chicken, but I still think we're playing Volunteer's Dilemma...)

posted by Jpfed at 6:45 AM on July 26 [4 favorites]

Thanks -- this was interesting. I never really felt like I got this stuff when we studied it in Trade Regulation, and I still don't 100% feel like I do, but I'm a lot closer to understanding it now.

posted by jacquilynne at 7:47 AM on July 26

Ooh...it's time for a little GAME THEORY. (For real.)

posted by adamgreenfield at 7:48 AM on July 26 [1 favorite]

jpfed, did you just reinvent Diplomacy?

posted by rebent at 8:46 AM on July 26 [1 favorite]

Sure, I trust you, just cut the cards.

posted by AugustWest at 8:48 AM on July 26

Jpfed: *"First, what about a style of iterated game in which players have some degree of control over who they interact with?"*

I believe that the easiest way to model this is probably to just add an option to each game for both players that acts as a unilateral veto--if either player selects this option instead of cooperate or cheat, the payoffs for that game are 0 for both players. But as you say, there are a lot of ways to do it.

The second question is easier, so: what happens when different agents have different views of the payoff matrix? Well, the beauty of non-zero-sum games is that that is the assumption. The parenthetical notation is just a way of representing that--if you're the first player, you cover up all the second values and that's your payoff matrix, and vice versa for the second player. The value of a particular joint strategy for you has no impact whatsoever on the value that I care about. And in general, this abstraction takes care of a lot of problems of differing information or values between the players--all you're ever concerned about as *homo gametheoreticus* is maximizing your own score without regard to others' lot.

On the other hand, what if the value of strategies is not fixed but corresponds to some probability distribution (uncertainty), as in your first question? It's worth noting that this is actually the case in this simulation! A mistakenly swapped choice can be viewed as a distribution with four outcomes (both players can err) where you get your intended payoff when neither player makes a mistake. So this is super germane to the question at hand, but the bad news is that some googling isn't revealing much. (Although I'm certainly not an expert, so there may be a term of art I'm missing.) It looks to me like this is a relatively unexamined question.

posted by TypographicalError at 12:34 PM on July 26
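
The unilateral-veto idea is easy to sketch in code; the payoff numbers below are placeholders for illustration, not the linked game's exact values:

```python
# Sketch of a "unilateral veto" third move: if either player vetoes ("V"),
# the round pays nothing to anyone. Payoff numbers are placeholders.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 2,  ("C", "D"): -1,
    ("D", "C"): 3,  ("D", "D"): 0,
}

def payoff(me, them):
    """My payoff for one round, with 'V' (veto) as a third available move."""
    if "V" in (me, them):  # a veto by either player zeroes out the round
        return 0
    return PAYOFFS[(me, them)]

print(payoff("C", "C"), payoff("V", "D"), payoff("C", "V"))  # 2 0 0
```

With a veto available, "refuse to play against known cheaters" becomes an expressible strategy without changing the matchmaking machinery at all.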

The keyword for this type of question - the population changing over time as a result of variances in payoffs - is "replicator dynamics". I think Wikipedia's introduction is not great (unusually for Wikipedia), but I see plenty of introductory pdfs on Google. To answer the question about randomly varying payoffs, the keyword would be "stochastic replicator dynamics", which I suspect is closely related to stochastic differential equations, a well-worn topic.

The simulations done in the link (and when I teach this concept...) aren't quite that though, because they're discrete. The replicator equation is a differential equation - the population size is changing continuously through time. The discrete version means that either you take smallish steps in time, or you consider finitely many individuals, or both (the link does both). The easiest way to approximate these dynamics yourself is to take discrete time steps and do an Euler approximation to the solution - something that would work in the stochastic case too.

I should also add, the link is largely cribbing the work of a researcher named Robert Axelrod. He's got a book called "The Evolution of Cooperation" which I haven't read, and a number of more technical papers, which I have. The only addition (other than slick presentation) I see the link making is this idea that it's your number of close friends which determines your likelihood of repeating play with a given individual, and the gloomy conclusion they reach as a result. I would take that with a seriously huge grain of salt though! Like, a massively huge grain of salt.

posted by dbx at 2:01 PM on July 26 [2 favorites]
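
The Euler-step approach described above can be sketched directly; the 2x2 payoff matrix below is a generic prisoner's-dilemma example chosen for illustration, not the linked simulation's exact numbers:

```python
# Euler approximation of the replicator equation for a 2x2 symmetric game:
# x_i changes at rate x_i * (f_i(x) - average fitness).

def fitness(A, x):
    """Expected payoff of each strategy against population mix x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def replicator_step(A, x, dt=0.01):
    """One Euler step of the replicator dynamics."""
    f = fitness(A, x)
    avg = sum(xi * fi for xi, fi in zip(x, f))
    return [xi + dt * xi * (fi - avg) for xi, fi in zip(x, f)]

# Example prisoner's dilemma payoffs (row = my strategy: 0=cooperate, 1=defect).
A = [[2, -1],
     [3, 0]]
x = [0.9, 0.1]  # start with 90% cooperators
for _ in range(2000):
    x = replicator_step(A, x)
print(x)  # defection dominates here, so x drifts toward [~0, ~1]
```

Note that this update rule conserves the total population share exactly (the increments sum to zero), which is one reason the Euler discretization is well-behaved here.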

*I believe that the easiest way to model this is probably to just add an option to each game for both players that acts as a unilateral veto*

It seems like your idea would slot in nicely with the existing theoretical and computational machinery.

Note that this doesn't allow a harassment/predation matchup where one player unilaterally says "Imma fuck with this rube". To allow harassment/predation, I imagined every player having a list of "social connections", where the list is initialized to contain every other player. A player could, every so often, replace one of the social connections in their list with one that points to a different player (this does not affect any other player's social connection list: I might choose to avoid you, but that does not mean that you choose to avoid me). When the system needs to choose two individuals to pair up, it selects a connection uniformly at random from the concatenation of all connection lists (equivalently, it selects a "starting player" and then a random connection from their list).

*Well, the beauty of non-zero-sum games is that that is the assumption.*

No, that can't be true. In normal games as I've seen them presented, when I decide my strategy, I know my payoffs *and* your payoffs, and I assume that you know your own payoffs *and* my payoffs. That's how I can simulate you before I choose my action: as I iteratively second- and third- guess my actions and your best reaction to them, I need to know what you think your payoffs are going to be if I'm actually going to settle on one of the Nash equilibria.

But as I attempted to pose in my second question, there could be mutual misunderstanding that leads the players to incorrectly simulate one another: both of us may have incorrect notions of both of our payouts.

posted by Jpfed at 8:18 PM on July 26
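
One way to make the mutual-misunderstanding point concrete: a player who knows only their *believed* payoff matrix can still pick, say, a maximin move from it, and two players holding different matrices are then reasoning about different games entirely. The matrices below are invented for illustration, not canonical Chicken or Volunteer's Dilemma values:

```python
# Each player picks a maximin move (best worst-case) from the game they
# BELIEVE is being played. Rows = my move, columns = your move, entries =
# my payoff. Numbers are invented for illustration.

def maximin_move(my_payoffs):
    """Index of the move that maximizes my worst-case payoff."""
    return max(range(len(my_payoffs)), key=lambda m: min(my_payoffs[m]))

believed_chicken = [[0, -1],     # move 0 = swerve
                    [2, -10]]    # move 1 = drive straight
believed_volunteer = [[-1, -1],  # move 0 = volunteer
                      [1, -10]]  # move 1 = free-ride

p1 = maximin_move(believed_chicken)    # 0: swerving guarantees at least -1
p2 = maximin_move(believed_volunteer)  # 0: volunteering guarantees -1
print(p1, p2)
```

Maximin is chosen here only because it needs no model of the opponent; any best-response reasoning would additionally require each player to guess the *other's* believed matrix, which is exactly the regress the comment describes.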

*I should also add, the link is largely cribbing the work of a researcher named Robert Axelrod. He's got a book called "The Evolution of Cooperation"*

If you get to the end, the link explicitly acknowledges that it is based on this work (you can also skip to the end by clicking on the circles at the bottom). So it's not "cribbing", since the link does attribute the research to its creator.

He also links to a recent Time Magazine article about that 1914 Christmas in the trenches, which is a neat read.

posted by obscure simpsons reference at 8:54 PM on July 26

There's a missing variable here though.

What if each player gets only 10 coins to start with?

In this case, when a co-operator plays with a co-operator they both gain each round; result: infinite coins.

But if a co-operator plays with an always-cheat, they get 10 rounds and then the co-operator is out of cash.

Result: co-operator has zero, always-cheat has 30.

Play that version over and over and see who wins?

posted by Just this guy, y'know at 3:34 AM on July 27
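
The budget variant can be simulated directly; the cost/benefit numbers (cooperating costs 1 coin, the partner gains 3) are assumed from the linked game, and a broke player is assumed to be forced to defect since they can't pay the cooperation cost:

```python
# Budget variant: each player starts with `start` coins; cooperating costs
# 1 coin and gives the partner 3. A broke player is forced to defect.
# Strategies here are fixed moves ("C" or "D") for simplicity.

def budget_match(strat_a, strat_b, start=10, rounds=50):
    bank = [start, start]
    for _ in range(rounds):
        moves = [strat_a, strat_b]
        # Can't afford the 1-coin cost of cooperating? Forced to defect.
        actual = ["D" if bank[i] < 1 and moves[i] == "C" else moves[i]
                  for i in range(2)]
        for i in range(2):
            if actual[i] == "C":
                bank[i] -= 1
                bank[1 - i] += 3
    return bank

print(budget_match("C", "D"))  # [0, 40]: cheat's 30 in winnings + starting 10
print(budget_match("C", "C"))  # [110, 110]: mutual cooperation compounds
```

This matches the comment's arithmetic: the always-cheat ends 30 coins up on its starting stake, while the co-operator is tapped out after exactly 10 rounds.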

(SPOILERS)

Huh, so after the part at the beginning where you play each of the original 5 types of players, you get to a screen that says *"(the lowest & highest possible scores are 7 and 49, respectively)"*. I figured out how to get the highest possible score, but how do you get the lowest possible score of 7?

posted by 23skidoo at 6:32 PM on July 28


This thread has been archived and is closed to new comments

posted by J.K. Seazer at 6:25 AM on July 26