# The Traveler's Dilemma
May 30, 2007 8:56 AM

"He asks each of them to write down...any dollar integer between 2 and 100 without conferring together. If both write the same number...he will pay each of them that amount. But if they write different numbers, he will ... pay both of them the lower number along with a bonus and a penalty--the person who wrote the lower number will get \$2 more...and the one who wrote the higher number will get \$2 less.... For instance, if Lucy writes 46 and Pete writes 100, Lucy will get \$48 and Pete will get \$44."
What amount would you choose? And what does your answer tell us about the limits of Game Theory?
posted by empath (245 comments total) 21 users marked this as a favorite
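The payoff rule quoted in the post can be written out as a small function (a sketch; the `payoff` name and the bonus/penalty parameters are ours, not the article's):

```python
def payoff(mine, other, bonus=2, penalty=2):
    """Payout to the player who bid `mine` against a player who bid `other`,
    under the rules quoted above: a tie pays face value, the lower bidder
    gets the lower number plus a bonus, the higher bidder gets the lower
    number minus a penalty."""
    if mine == other:
        return mine
    if mine < other:
        return mine + bonus      # lower bidder: lower number plus bonus
    return other - penalty       # higher bidder: lower number minus penalty

# The Lucy/Pete example from the post: Lucy writes 46, Pete writes 100.
print(payoff(46, 100))   # Lucy's payout: 48
print(payoff(100, 46))   # Pete's payout: 44
```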

50 bucks. I'm probably wrong, cause I always am about this sort of stuff, but I'd do it because I would think it would be the most likely to have been guessed by another person who was trying to guess what I was thinking.
posted by facetious at 9:08 AM on May 30, 2007

Damn it facetious, you just cost me \$52!
posted by Pollomacho at 9:11 AM on May 30, 2007

90, and hope that my partner is also a greedy fuck.
posted by flibbertigibbet at 9:12 AM on May 30, 2007

I fail to see how logic leads us to that point (as the article says it should)--if the players are logical enough to realize they could earn a bonus, they should also be logical enough to realize that following this approach indefinitely will leave them with a greatly reduced reward in the end. Just because a response is "rational" in one context does not mean that rejecting that rational response is irrational, or that there are not other rational choices that may give better results.

Disclaimer: I am slightly hungover and my coffee hasn't finished brewing yet. How this affects the rationality of the above statement remains to be seen.
posted by Benjy at 9:13 AM on May 30, 2007

100
posted by papercake at 9:14 AM on May 30, 2007 [2 favorites]

Yeah, 100. People, I'm telling you now, if any of you run into this situation with me in the future, let's both choose 100.
posted by Greg Nog at 9:16 AM on May 30, 2007 [9 favorites]

I'm disappointed that the article doesn't address the solution Douglas Hofstadter proposed (in his column in Scientific American, no less) ~20 years ago, to answer the question of why people don't choose the "rational" Nash equilibrium of 2.

Hofstadter proposed what he called (IIRC) hyperrationality - not only that one's behavior is rational, but that the behavior of all players is rational, and all players know that all players are rational, and all players know that all players know that all players are rational, ad infinitum.
posted by DevilsAdvocate at 9:17 AM on May 30, 2007 [3 favorites]
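The \$2 Nash equilibrium mentioned here can be checked by brute force, assuming the payoffs described in the post (a tie pays face value, the lower bid earns a \$2 bonus, the higher bid takes a \$2 penalty); a sketch:

```python
def payoff(mine, other):
    # Payoffs as described in the post: tie pays face value,
    # lower bid gets itself + $2, higher bid gets the lower bid - $2.
    if mine == other:
        return mine
    return mine + 2 if mine < other else other - 2

bids = range(2, 101)

def is_best_response(a, b):
    # a is a best response to b if no other bid does strictly better against b.
    return payoff(a, b) >= max(payoff(x, b) for x in bids)

# A pair is a Nash equilibrium when each bid is a best response to the other.
equilibria = [(a, b) for a in bids for b in bids
              if is_best_response(a, b) and is_best_response(b, a)]
print(equilibria)   # → [(2, 2)]
```

Against any bid above 2 the unique best response is to undercut by one, so the only pair of mutual best responses is (2, 2).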

100 clearly. At the very least you would have the smug superiority of knowing that you were trying to do what's best for both of you.
posted by sourbrew at 9:18 AM on May 30, 2007

Why would anyone not choose \$100?
posted by rusty at 9:18 AM on May 30, 2007

Sorry, double post, and without a completed comment, no less! I meant to continue as follows:

Since the game is symmetrical, a hyperrational player may conclude that another hyperrational player will come to the same choice that he does; knowing that both players will make the same choice, it is easy for the hyperrational player to see that 100 is the correct choice.
posted by DevilsAdvocate at 9:19 AM on May 30, 2007

Benjy: Agreed. Also, I think the "logic" here requires that we think the participants have a greater desire to harm the other than they do to gain for themselves.

If I select 2, and you select any number greater than 2, I get \$4 and you get squat. But why would my desire to screw you be greater than my desire to get money?

The difference between \$100 and \$101 is so small that I can't see anyone choosing \$99 simply to a) get an extra buck, and b) screw the other guy out of \$3. Why should I bother?
posted by sotonohito at 9:20 AM on May 30, 2007 [1 favorite]

The game assumes people would rather get more money than their partner than get as much money as possible, an assumption which is surely false.

Only a player obsessed with obtaining a greater reward than the other would wind up at the Nash equilibrium of 2. Otherwise, 100.
posted by BorgLove at 9:21 AM on May 30, 2007 [2 favorites]

100 ... the next time i'd play, i'd pick 100 ... and keep picking 100, as eventually the other person would catch on
posted by pyramid termite at 9:21 AM on May 30, 2007

I would choose \$2, 'cause damn did you see the piece of shit car this guy drove up in?
posted by NationalKato at 9:21 AM on May 30, 2007

Why would anyone not choose \$100?

Because if I think everyone will choose \$100, then I can increase my payout by choosing \$99 - I'll end up with \$101. So, by that logic, everyone should choose \$99, but then I should choose \$98, etc, etc...
posted by Bort at 9:22 AM on May 30, 2007
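Bort's regress can be traced mechanically: starting from a bid of 100 and repeatedly switching to the best response against the current bid walks all the way down to 2. A sketch under the post's payoff rules:

```python
def payoff(mine, other):
    if mine == other:
        return mine
    return mine + 2 if mine < other else other - 2

def best_response(other_bid):
    # The bid in 2..100 that pays best against other_bid.
    return max(range(2, 101), key=lambda b: payoff(b, other_bid))

bid, steps = 100, 0
while best_response(bid) != bid:        # keep undercutting until no bid improves
    bid, steps = best_response(bid), steps + 1
print(bid, steps)   # → 2 98: the regress bottoms out at the $2 equilibrium
```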

100, right? You're assured the greatest gain if you choose the highest number. Is this not completely obvious, or am I just a super-genius?
posted by Pecinpah at 9:22 AM on May 30, 2007

100. It maximises the pain for the insurance guy, who needs to be punished for thinking up such an infernal scheme.
posted by edd at 9:22 AM on May 30, 2007

100, right? You're assured the greatest gain if you choose the highest number. Is this not completely obvious, or am I just a super-genius?

No, if the other person says any number under 100, you get penalized \$2. But I'd still say 100 every time. A \$2 penalty isn't high enough to deter me from trying to game the system.
posted by Terminal Verbosity at 9:25 AM on May 30, 2007

Why would anyone not choose \$100?

Only a logician would be stupid enough not to choose \$100.

Knowing, based on the article, that most people pick \$100, I would also pick \$100, since I would not be afraid that my counterpart would be an idiot logician who would compromise our position by arguing his way down to \$2.

The logic used to get to \$2 is the same kind of stupidity that helps MeFi threads about religion, politics, and a great many other things devolve into stupidity.
posted by The World Famous at 9:25 AM on May 30, 2007 [13 favorites]

I'd start by choosing \$100, since it offers very nearly the highest possible payoff regardless of the other player's strategy.

Since there's no (apparent) incentive in this game for the other player to lowball her number, I'd assume that she, too, would choose a number close to or equal to \$100.

If she chooses \$100, we both maximize our payoffs.

Even if the other player maximizes her payoff at my expense and chooses \$99, receiving \$101, I'm still up \$97, giving her only a 4% advantage. I'd be happy with \$97.

If she lowballs her number to punish me, say she chooses \$2, then I get nothing, giving her an "infinite" advantage. But then she gets only \$4 — practically no payoff, either. It's probably not a good long-term strategy for either of us.

My strategy would assume mutual self-interest would dominate over vindictiveness, since we can all make a lot of money off this very generous researcher, if we more or less agree to get along.

Nonetheless, if we played this game repeatedly, I might change my strategy to occasionally choose \$99, if my opponent does so. We might then cycle to ever-lower "bids", and probably settle on some equilibrium value, occasionally underbidding each other slightly to score a temporary advantage.
posted by Blazecock Pileon at 9:26 AM on May 30, 2007

Nthing those who are confused that anyone would choose something other than \$100. I think there needs to be a bigger bonus for the under-bidder, or some other penalty for greed.

You both have to choose a number - whoever chooses the lower number gets the dollar value of the higher of the two chosen numbers.

If you both choose the same value, you both get nothing.

What number do you choose?
posted by Jon Mitchell at 9:26 AM on May 30, 2007 [5 favorites]

The problem with game theory in this instance is that it assumes perfect rationality in what is basically a lottery question. The travelers have all the information. They can make out like bandits because of the other person's ignorance. People with that option aren't going for the "perfectly rational" choice. They are going for the "let's screw that guy" choice.
posted by dios at 9:29 AM on May 30, 2007

I'd pick 100 without a second thought. If my counterpart picked 2, I'd be really pissed off. The fact of the matter is that picking 2 is only rational in the coldest, most hyper-logical HAL 9000 definition of the word, and you'd need to overthink countless plates of beans to reach that point.

Game theorists who wonder why everyone doesn't pick 2 are so wrapped up in their field that they forget how people actually think. A normal person would never think "If my counterpart picks X, I must pick X minus 1," and then chase it all the way to the bottom. Picking 100 or a number close to it may not be the "rational" choice, but it's the one that stands the highest likelihood of maximizing reward. People care about maximizing their reward a whole lot more than they care about selecting the optimal rational choice.

Oh, and as to why people pick 100 even though 99 dominates? It's because in order to determine that 99 dominates 100 you need to puzzle things through. Trying to squeeze that last dollar out of the problem takes more effort and analysis than most people would consider worth the reward. Avoidance of excess labor is a factor that most normal people tend to consider.

So let's break it down. Picking 2 requires a lot of thought and drastically reduces potential gain. Picking 99 requires some thought and maximizes potential gain. Picking 100 requires the least thought and has a potential gain almost as high as 99. It's no wonder that 100 is the most common answer.

posted by Faint of Butt at 9:30 AM on May 30, 2007 [16 favorites]

\$100. The value of possibly getting more than the other guy isn't high enough for me to race him to the bottom to do it.
posted by tula at 9:30 AM on May 30, 2007 [2 favorites]

Traveler's Dilemma (TD) achieves those goals because the game's logic dictates that 2 is the best option,

Only if your desire is to "win" (get more than the other person) rather than win (get the most money you can).

This "dilemma" seems completely stupid to me.
posted by papakwanz at 9:31 AM on May 30, 2007 [2 favorites]

I think there are really only two strategies, Low and High. If I thought there was a better than 10% chance of the other guy going high, I'd go high. If I was POSITIVE that the other guy was an asshole, I'd choose 2.
posted by empath at 9:33 AM on May 30, 2007

tit for tat is the way to go.
posted by SaintCynr at 9:34 AM on May 30, 2007 [1 favorite]

It seems that game theorists assume that each player wants to "beat" the other player, whereas a participant in the traveler's dilemma just wants as much compensation as possible for his broken item. If the other traveler gets more money than you, who cares? It's better to be "beaten" by the other traveler than to have less money in your pocket.

The "game theory" model inappropriately casts the other traveler as the opponent. In fact, the airline manager is the opponent, and the goal is to extract as much money as possible from him. That can only occur when both participants choose "100."
posted by deanc at 9:34 AM on May 30, 2007 [3 favorites]

Btw, I think the world would be a much better place if we all agreed that if we ever get into this one, or an analogous one, we would all pick \$100.

And also, that if anyone picks 2, we stone them.
posted by empath at 9:35 AM on May 30, 2007 [3 favorites]

hunnerd.
posted by BlackLeotardFront at 9:36 AM on May 30, 2007

I would love to have an explanation of how some game-theorists are so lacking in meta-cognition or insight into real games and real rationality. Is it simply a fetishisation of mathematical models, or something deeper and more perverse? Perhaps a panic-defense against the inadequacy of their ability to represent actual rational cognition in simple toy models.
posted by MetaMonkey at 9:36 AM on May 30, 2007 [1 favorite]

Seems to me I'd pick 100, and then when the other person picked 2, I'd shoot them in the chest and pick 100 again. The next participant would be far more cooperative.
posted by aramaic at 9:36 AM on May 30, 2007 [3 favorites]

(uh...into this situation or an analogous one)
posted by empath at 9:36 AM on May 30, 2007

\$100. Duh.

Given a choice of maximizing the reward for both of us or simply beating my "opponent" I would maximize the reward for both of us. Beating your opponent only matters if you're playing with monopoly money or if there is some secondary greater real cash reward for winning the game.

posted by lordrunningclam at 9:37 AM on May 30, 2007

\$100
posted by autodidact at 9:38 AM on May 30, 2007

The scary part is that game theory was what our government was depending on to save us all from nuclear annihilation at the hands of the Soviets.
posted by empath at 9:39 AM on May 30, 2007

God, I hate rational choice theory.

Since, in the scenario, the players are asked to imagine that they are playing to recoup the loss of a broken antique, wouldn't they assume that they had to guess a number higher than the imagined cost of the antique in order to actually make a profit? If they chose \$2 because it is the "rational choice," then they would have to also assume that they would be losing money, because the airline damaged a possibly valuable belonging.

I think that the outcome of the game would change if the item in question were something clearly less valuable, but the possible payout was the same, if for no other reason than the players would feel ashamed at the fact that they were obviously liars. In the real world, decisions are based on emotions, not just on logical calculations. Is this fact really that shocking?
posted by exacta_perfecta at 9:39 AM on May 30, 2007

Sure, if you choose \$2, you can definitely get \$2 and maybe even \$4. But if you went with \$100, or \$50, or anything high, there's the chance that you would end up with nothing, but there's also the chance that you would end up with \$100. A potential \$100 is more fun than a certain \$2.
posted by drezdn at 9:39 AM on May 30, 2007

You both have to choose a number - whoever chooses the lower number gets the dollar value of the higher of the two chosen numbers.

If you both choose the same value, you both get nothing.

What number do you choose?

first time 100 ... second time 99 ... 3rd 100 ... 4th 99 ... and so on

it would take a brain dead opponent not to come up with the obvious counterstrategy ... one that lets both of us win
posted by pyramid termite at 9:40 AM on May 30, 2007 [1 favorite]

I think in this case that means you'll end up with more money than someone who does.
posted by ElmerFishpaw at 9:41 AM on May 30, 2007 [3 favorites]

Drezdn: I think you nailed it. Rational Choice theory would seem to predict that no one would ever play slots or roulette, and yet they do, in vast numbers.
posted by empath at 9:41 AM on May 30, 2007 [2 favorites]

Man, if I chose \$100 and my partner chose \$99, I'd be like, "Congratulations. You are willing to look like a dick for the princely sum of one dollar."
posted by Greg Nog at 9:43 AM on May 30, 2007 [7 favorites]

He's playing possum, folks. This is like "Big Deal In Dodge City," only with nerds instead of cowboys.
posted by Jofus at 9:43 AM on May 30, 2007 [2 favorites]

For chrissake, \$100 is the only right answer. Nobody cares about winning \$98 rather than \$102.
posted by hodyoaten at 9:44 AM on May 30, 2007

Would you play this differently if you knew for a fact that it was a one-time 'game' and you would never see any of these people again, versus knowing you were going to play 5 more times with the same people?
posted by empath at 9:45 AM on May 30, 2007

Part of the problem with this article and its use of the Traveler's Dilemma is that it doesn't consider risk, and that's why the results don't match the Nash equilibrium.

Compare using the value of 2 (NE) to 100. Worst (logical) case with 2 is I get 2 dollars (nobody can pick lower). Worst case with 100 is I get no dollars (other person picks 2).

So the difference in worst-case payoff between the two choices is 2 dollars. The difference between best-case payoffs (\$4 versus \$100) is 96 dollars. Those willing to take risk will have higher rewards. So what level of risk is a person willing to take? In this case two bucks doesn't mean much to me. 100 bucks does. So I'll always pick 100.

Another miss is the goal assumption. What's the goal of the game? To get more money than the other guy, or to get as much money for yourself as possible? I pick 100 and you pick 99? Great, you win 101, but I still get 97 bucks. Why do I care if you get 4 bucks more? I've maximized the amount of money I could walk away with.

I think this whole article is narrow-minded; it fails to factor in risk versus reward and the assumed goal.
posted by ruthsarian at 9:45 AM on May 30, 2007 [2 favorites]
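ruthsarian's worst-case/best-case comparison can be tabulated directly under the post's payoff rules (a sketch; the best case for a \$2 bid is \$4, reached whenever the other player bids higher):

```python
def payoff(mine, other):
    if mine == other:
        return mine
    return mine + 2 if mine < other else other - 2

others = range(2, 101)
for bid in (2, 100):
    # Payoffs for this bid against every possible opposing bid.
    outcomes = [payoff(bid, o) for o in others]
    print(bid, "worst:", min(outcomes), "best:", max(outcomes))
# 2   worst: 2  best: 4
# 100 worst: 0  best: 100
```

The downside of bidding high is capped at the whole pot; the downside of playing "safe" is capped at cheeseburger money.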

On reading more of the article, it seems they did other experiments where the payoff ranged from 80 to 200. In that case, since the goal of a real-live player is to end up with some compensation, it becomes more rational to go down to 80, which is what the participants did after multiple trials.

As drezdn said, a potential \$100 is more fun than a certain \$2. However, a certain \$80 isn't that much worse than a possible \$200, depending on how large the penalty is.
posted by deanc at 9:47 AM on May 30, 2007

The sociology of this scenario and of the Prisoner's Dilemma are interestingly different, and on that account the "rational" play in this Dilemma seems far more clearly a failure of logic. Why would any traveler desire \$2 more than her companion, at the companion's expense of \$2? Better to milk the corporate cash cow whose error it was, right? On the other hand, just being arrested and in a jail cell tends to cast a pall of deserved punishment on a Prisoner, justifying somewhat any vindictive moves a player in that Dilemma might make against the other.
posted by Ambrosia Voyeur at 9:47 AM on May 30, 2007

Why would anyone not choose \$100?

My understanding from the article: because the penalty isn't very large. If the range was from \$50 to \$100 with a \$50 reward/penalty, you might be tempted to underbid my 100 by saying 99. Result: \$149 for you, \$49 for me

Problem with Hofstadter's hyperrationality: I don't trust the other guy to act that way, e.g. facetious and flibbertigibbet.

I still choose \$100 though.

With a \$2 reward/penalty, I think: "What have I got to lose? If I choose the \$2, John Nash can't screw me out of that, but the most I'll get is enough to buy a burger and fries. But if I choose \$100, Doug Hofstadter and I could both walk away with \$100, and all I stand to lose is my cheeseburger money"

But if the reward/penalty was \$50, and I don't know if it's John or Doug in the other room, it's a much tougher choice. I might guarantee myself at least the minimum by choosing the Nash strategy, rather than gambling on the maximum by choosing the Hofstadter strategy
posted by rossmik at 9:48 AM on May 30, 2007 [2 favorites]
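rossmik's \$50 variant is just the same game with a bigger reward/penalty; parameterizing it reproduces the \$149/\$49 split (a sketch; the parameter name `r` is ours):

```python
def payoff(mine, other, r=2):
    # r is the reward/penalty; r=2 is the original game.
    if mine == other:
        return mine
    return mine + r if mine < other else other - r

# rossmik's example: range $50-$100, reward/penalty $50, bids 99 vs. 100.
print(payoff(99, 100, r=50))   # underbidder: 99 + 50 = 149
print(payoff(100, 99, r=50))   # overbidder:  99 - 50 = 49
```

As the penalty grows relative to the pot, undercutting stops being a rounding error and starts mattering, which is the point of the tougher variant.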

Adam Curtis' documentary The Trap (discussed previously) explores some of the issues around this: how these kinds of simple and narrowly-applicable mathematical-economic theories of rationality have been abstracted far beyond their usefulness, to become pervasive, dogmatically accepted economic (and political) truth in spite of their self-evident contradiction of observed reality. Also, it's now on Google Video.
posted by MetaMonkey at 9:50 AM on May 30, 2007 [9 favorites]

Would you do this differently if you knew for a fact that this was a one time 'game' and you would never see any of these people again than you would if you were going to play 5 more times with the same people?

I wouldn't have the opportunity to learn the other's strategy. So I'd base my decision, again, on maximum payoff, since the minimum payoff is nearly worthless.

There's practically no return on choosing \$2, but I'd be happy bidding \$100 and winning either \$100 or \$97, on the assumption that the other player shares my dollar valuation system.
posted by Blazecock Pileon at 9:50 AM on May 30, 2007

On the other hand, just being arrested and in a jail cell tends to cast a pall of deserved punishment on a Prisoner, justifying somewhat any vindictive moves a player in that Dilemma might make against the other.

Nash didn't consider the shiv strategy.
posted by Blazecock Pileon at 9:52 AM on May 30, 2007 [2 favorites]

I think this whole article is narrow-minded and lacks factoring in risk versus reward and assumed goal.

I think the point of the article was to point out that game theorists are narrow-minded. The author himself doesn't seem that surprised that people choose 100 or 99-- in fact, he seems to have set up the experiment knowing that this would be the outcome. What the article points out is that, as obvious as this might be to us, if you plug the scenario into a game theory spreadsheet, it predicts that everyone should choose \$2. Game theory itself, then (if the author is to be believed), apparently fails to factor in risk versus reward and the assumed goal.
posted by deanc at 9:53 AM on May 30, 2007 [2 favorites]

empath has it. Followup games are where game theory comes into it. If you can play an unlimited number of times, \$2 is the rational choice. If you can play once, \$100 is the obvious choice. Figuring out the optimum choice for a given number of games is left as an exercise to the reader.
posted by Skorgu at 9:57 AM on May 30, 2007

Methinks their choice of back story is getting in the way of their theory. Yes? No? Reframe the question in terms of points, where whoever has the most points at the end "wins" and it might make more sense, at least to me, but then they wouldn't have a Scientific American article touting their research.
posted by Skwirl at 9:59 AM on May 30, 2007 [1 favorite]

\$100. Btw, wasn't game theory used in developing the mutual assured destruction doctrine of nuclear war strategy? Those guys were sure smart.
posted by RussHy at 10:01 AM on May 30, 2007

Here's the thing: the theory doesn't serve this contrived example well, and the realm of ways in which game theoretic models end up at odds with real human behavior is a fascinating one, but the example can be put into a context where it does make sense.

Assume one trial a day. Assume you need medicine that costs \$2 a day. Assume you will die if you do not get your medicine, that you are broke, and that you cannot get medicine by any alternate route.

What number do you choose?

The breakdown of the model in real life situations is interesting, and there's a great big yawning gap between classical game theory and observable human behavior, but the model isn't junk. It just doesn't incorporate all these meta-model things we're talking about as presented: there's not enough at stake in the TD to drive truly mercenary, rational behavior on the part of the participants.
posted by cortex at 10:01 AM on May 30, 2007 [4 favorites]

I would choose \$69, followed by a chest-bump and a HOO HAH.
posted by The Straightener at 10:02 AM on May 30, 2007 [1 favorite]

The only thing worse than the generalizations made by game theory about people is the generalizations that people make about game theorists.
posted by vacapinta at 10:02 AM on May 30, 2007 [3 favorites]

guys, those guys were sure smart. I, on the other hand, am not so smart.
posted by RussHy at 10:02 AM on May 30, 2007

If you can play an unlimited number of times, \$2 is the rational choice.

Only if 1) you want to have more money than the other guy, and 2) you want to amass your fortune really slowly and 3) you can't trust the other guy.

If you want to get a lot of money quickly, and you can trust the other dude, the rational choice is \$100.

Isn't it?
posted by 23skidoo at 10:04 AM on May 30, 2007

This somehow reminds me of the judge who tells the convict: "You are to be executed next week at 6 in the morning, on a day that will be a surprise to you."

The convict reasons: "I can not be executed Saturday (the last day of the week), because I would know on Friday that I must be executed Saturday, and so it would not be a surprise. Therefore, knowing I can not be executed Saturday, I can not be executed Friday either, because it would not be a surprise."

And reasoning thus, he is convinced he can not be executed.

So he is quite surprised when, on Wednesday morning, he is taken out and executed.
posted by hexatron at 10:05 AM on May 30, 2007 [4 favorites]

I'd have to side with the hyperrationals on this one. \$100, and that's my final offer!

My first reaction to this was that it was a "dating game" type of problem: I assumed that Pete and Lucy were a couple, and that the goal was for them to maximize their collective income. The insurance guy's evil scheme, clearly, was to penalize them for disloyalty to each other!

On a related note: don't these researchers know that altruism is hard-wired?
posted by otherthings_ at 10:05 AM on May 30, 2007

I was about to post an argument that the game has a false premise. \$100 seemed right to me. But then I actually read the whole article and found that the point is that game theory itself has flaws. \$2 is the "rational" response according to formal rules. But only a psychopath would give that answer. :)
posted by stiggywigget at 10:06 AM on May 30, 2007

I'm more interested in the version that pays between \$2 and \$1,000,000.
posted by Clamwacker at 10:07 AM on May 30, 2007

Man, if I chose \$100 and my partner chose \$99, I'd be like, "Congratulations. You are willing to look like a dick for the princely sum of one dollar."

Yup.
posted by Bookhouse at 10:07 AM on May 30, 2007

Reframe the question in terms of points, where whoever has the most points at the end "wins" and it might make more sense

and

Assume one trial a day. Assume you need medicine that costs \$2 a day. Assume you will die if you do not get your medicine, that you are broke, and that you cannot get medicine by any alternate route.

These are both situations in which "winning" is a necessary outcome. For many scenarios, winning isn't necessary. What's desired by players in the TD is simply an "acceptable" outcome, rather than a winning outcome. Game theory assumes that each player wants to win, whereas in many real-life situations, each player merely wants to "get by," which apparently creates better outcomes in some cases.

Now that I'm done with graduate school, I can confess that I was supposed to read a book on game theory for my qualifying exams, but I never made it past the first couple of chapters. Thus, I'm a bit out of my depth here.
posted by deanc at 10:09 AM on May 30, 2007

I agree with deanc that the blinkered me-vs-other-bidder game theoretic approach is the downfall here. If you cast it as me-vs-pilot, with the other player a completely irrational lunatic who's likely to spit out any number whatever, I make the expected payoff for a \$2 bid to be \$3.98 and the expected payoff for a \$100 bid to be about \$49.02.
posted by ormondsacker at 10:12 AM on May 30, 2007
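ormondsacker's expected values can be checked under one reading of that assumption: an opponent bidding uniformly at random on 2..100. The \$2 figure comes out to \$3.98 exactly; the \$100 bid comes out to about \$49.02 on this sketch:

```python
def payoff(mine, other):
    if mine == other:
        return mine
    return mine + 2 if mine < other else other - 2

others = range(2, 101)  # assumed uniform random opponent
ev2 = sum(payoff(2, o) for o in others) / len(others)
ev100 = sum(payoff(100, o) for o in others) / len(others)
print(round(ev2, 2), round(ev100, 2))   # → 3.98 49.02
```

Even against pure noise, the high bid is worth an order of magnitude more in expectation than the "rational" low one.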

I think the "rationality" here assumes that we're all motivated by money over anything else. If I'm more interested in being a socialized human being than an extra buck or two, then choosing \$100 is perfectly rational.

If I choose \$100 and you choose \$100, then we both get \$100 yey.

If I choose \$100 and you choose \$99 just to get an extra buck, I roll my eyes at how much of a dick you are, but at least I get \$97.

If I choose \$100 and you choose \$2, I marvel at how you're not only a dick, you're a self-defeating dick.

In all cases, I've reached my goal of being the person I want to be, so I'm perfectly rational. Money on this scale is irrelevant to that. You might as well offer me fifty bucks to punch a puppy and then declare me "irrational" because I didn't take it.
posted by L. Fitzgerald Sjoberg at 10:14 AM on May 30, 2007 [3 favorites]

You guys! You obviously can't pick \$100. It says any integer between 2 and 100. So you can pick 3 up to and including 99.

Sheesh.
posted by ODiV at 10:15 AM on May 30, 2007 [2 favorites]

I'm more interested in the version that pays between \$2 and \$1,000,000.

I know you're trying to be funny, but the dynamics aren't any different than the version that pays between \$2 and \$100. What if there were a version that pays between \$100,000 and \$1,000,000? What if you were penalized \$90,900 for guessing too high, and you could only play once? Then what would you do? The guarantee of \$100,000 is worth a lot more to me than the possibility of getting \$1,000,000 given a risk of getting only \$100. People with the net worth of high-level investment bankers may make different decisions, though.
posted by deanc at 10:18 AM on May 30, 2007

cortex: Here's the thing: the theory doesn't serve this contrived example well, and the realm of ways in which game theoretic models end up at odds with real human behavior is a fascinating one, but the example can be put into a context where it does make sense.

But the fascinating thing that this article highlights is that some professional game theorists (3 from a sample of 50) playing this game against other professional game theorists, would actually bid \$2.

I think most people would accept that the narrowly-rational strategy could make sense in some specific circumstances (playing poker, for example), but what is astonishing is that apparently some people who do this for a living don't realise that it doesn't work outside those circumstances.
posted by MetaMonkey at 10:21 AM on May 30, 2007

Who really needs that extra few dollars by playing this game asshole style? 100 works because people operate on a social level: "Will this guy beat the crap out of me if I pick 2?" On an emotional level "100 bucks and we all win. yah!"

The only person I can think of who would really think an extra 20 is worth it is probably too dumb to figure out the math to play properly.

The problem with these games is that there's nothing to lose. You didn't invest anything to play, and the monetary amounts are too small to make most people get all hot and bothered about the chance to get 20-30% more. The success of shows like Deal or No Deal is the fact that it's such a big fantasy you might as well keep going to the top. I would stop at a comfortable 10 or 20k because, hey, that's free money, and after that point comes a whole lot of risk.
posted by damn dirty ape at 10:21 AM on May 30, 2007

or perhaps you will, after 10 rounds, enter an auction with other players for something of value that you can buy with your monopoly money?
posted by Megafly at 10:26 AM on May 30, 2007

But the fascinating thing that this article highlights is that some professional game theorists (3 from a sample of 50) playing this game against other professional game theorists, would actually bid \$2.

It's fascinating to me that only 6% of a group composed of people so thoroughly grounded in the literature would make such a move: the stakes are (presumably) low, since professional game theorists probably aren't living on the street; they know the context well enough to anticipate a wide variety of play from the other folks; and their choice of \$2 could spring from anything from stubborn, self-aware loyalty to the model/principles, to mischievousness, to value-neutral point-making, etc.
posted by cortex at 10:28 AM on May 30, 2007

Yeah, I think this game's flawed, not the theory.
It doesn't state the goals. It appears to be a one-shot choice.
And it looks like it was designed specifically to make game theory look bad.

I like Jon Mitchell's game much better.
posted by MtDewd at 10:28 AM on May 30, 2007

I'm with most of the above. The puzzle assumes that it's better to come out ahead of the other player than to get a bunch of money. I'd rather get a bunch of money, even if it's only \$97, or whatever, than lowball and end up with \$7 or something lame like that.

Not terribly difficult, imho.
posted by HighTechUnderpants at 10:29 AM on May 30, 2007

The success of shows like Deal or No Deal is the fact that it's such a big fantasy you might as well keep going to the top. I would stop at a comfortable 10 or 20k because, hey, that's free money, and after that point comes a whole lot of risk.

I wonder about that. Jeopardy (tried to) screen out the superbrains along with the mediocre; does Deal or No Deal screen out the modest-return pragmatists?
posted by cortex at 10:30 AM on May 30, 2007

100 would be the obvious choice, and I'd hope the other person has the damn common sense to do the same... it's the only way to get the largest amount of money for both of you! I see the screwy 'logic' behind choosing 2... but that only gives you a maximum possible reward of \$4. Not very logical, IMO.
posted by triolus at 10:31 AM on May 30, 2007

I prefer Phillipe's game.
posted by L. Fitzgerald Sjoberg at 10:33 AM on May 30, 2007

Man, if I chose \$100 and my partner chose \$99, I'd be like, "Congratulations. You are willing to look like a dick for the princely sum of one dollar."

I've done plenty of things for free which made me look like a dick. The extra \$1 is just icing on the cake.
posted by DevilsAdvocate at 10:35 AM on May 30, 2007 [2 favorites]

I'm more interested in the version that pays between \$2 and \$1,000,000.

I think that version is better known as "Deal or No Deal."

does Deal or No Deal screen out the modest-return pragmatists?

From what I've heard about the contestant selection "test" - a series of situations with "would you take this offer?" - it seems the answer is yes.
posted by djlynch at 10:37 AM on May 30, 2007 [1 favorite]

Sorry, haven't gotten through all the comments yet, so I hope I'm not being completely redundant...

The article states that they apply backwards induction, which means you start with the last action of the game and work backwards. So, you start with \$100, then work back and find that the best action in response for the other person is to choose \$99. But then the reaction to that is to adjust the original \$100 down to \$98, which means your maximum winnings are still only \$100. So you have NOT improved at all and have at best only hurt your opponent. Ergo, induction fails and other logic must be applied (say, expectation values).

Thus with this scenario, game theory would NOT lead to a choice of \$2. Interestingly, if they upped the difference in values to \$3, then backwards induction would actually work as stated, sending the value down... I don't know enough about game theory to know if you're ONLY supposed to apply backwards induction, though, and I somehow doubt it.
posted by kigpig at 10:39 AM on May 30, 2007

If the dilemma is expanded to an iterated traveler's dilemma, Tit for Tat is very nearly an optimal strategy: you'd choose \$100 until the other guy chose \$2, at which point you'd fall back to \$2 for as long as you were playing him. For the single case, though, it all depends on the individuals playing and their own personal risk tolerance. As with any theory, a situation can be developed that makes the theory look dumb. Game theory is applicable (and effectively so) in a wide range of areas, but it does not purport to be able to accurately predict all human responses in all possible game-like areas.
posted by Skorgu at 10:39 AM on May 30, 2007
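The iterated version Skorgu describes is easy to simulate. A minimal sketch (Python; the payoff rule is the one from the post, and the trigger strategy is an assumed translation of Tit for Tat into this game, not anything from the article):

```python
def payoff(mine, theirs):
    """Traveler's Dilemma payoff: a match pays face value; the lower
    claim earns a $2 bonus, the higher claim pays a $2 penalty."""
    if mine == theirs:
        return mine
    return mine + 2 if mine < theirs else theirs - 2

def trigger(opponent_history):
    """Claim 100 until the opponent has ever dropped to 2, then claim 2."""
    return 2 if 2 in opponent_history else 100

a_hist, b_hist = [], []
total_a = 0
for _ in range(10):                      # ten rounds of the iterated game
    a, b = trigger(b_hist), trigger(a_hist)
    total_a += payoff(a, b)
    a_hist.append(a)
    b_hist.append(b)
# Two trigger players cooperate at $100 every round: total_a == 1000.
```

Against a defector who ever claims \$2, both players drop to the (2, 2) floor for the rest of the match, which is the "fall back" behavior described above.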

\$100

The criticisms of game theory are off the mark. Game theory doesn't factor in psychology, that's not a flaw. The intent is to give the optimal strategy for winning according to the parameters of the contest.

Game theory does have some relevance for psychology in showing how human behavior departs from the rational solution. Once those instances have been identified then some guess can be made about human motivations and cognitive biases. But game theory certainly isn't wrong and humans don't give a better answer.

In this case it appears that humans value a personal monetary reward over the possibility of victory over an opponent. Another interpretation is that humans recognize their opponents are less than rational.

Whether the game is played once or multiple times makes no difference. The strategy is the same. An optimal solution does not change because of an increase in the number of trials. If you have a human opponent and know the guy is liable to change for an irrational reason, then you can make a counter-strategy reflecting that. But you could also build that 'read' of your opponent into the question, and game theory would give you the optimal solution. But if you don't know that your opponent will change, you keep the same strategy.

It's like flipping a coin where you lose all your money on a loss vs. winning 10 times the amount on a win. If you chose to play at all, you might say you would only do it one time but wouldn't commit to a series of 5. That's fine, but recognize that when you do so there are assumptions, perhaps unconscious, about how much that windfall is worth to you at your current level of wealth, and how you feel about the 50% possibility of going broke vs. the 3% possibility of going broke. If those assumptions were built into the equation, game theory would come up with the same solution. But without that specification, the assumption going in is that money doesn't change value based on how much you have. For humans that generally doesn't hold true. People are more likely to bet everything when they are worth only \$3,000 vs. \$3,000,000, even if the proportional reward is the same.

Game theory does not lack in comparing risk, reward and assumed goal. That is its forte.

Finally, I think one of the proposed game theory solutions to the cold war was to preemptively bomb the Soviet Union before they could catch up to us enough in nuclear weapons to really hurt us. Whether that is the optimal solution depends, like all other game theory questions, on whether you agree with the assigned relative weights of victory, defeat and stalemate.
posted by BigSky at 10:41 AM on May 30, 2007 [1 favorite]

The puzzle assumes that it's better to come out ahead of the other player than to get a bunch of money.

No, it doesn't. The classical game theory argument that leads to choosing 2 does not rely on any assumption of a desire to "beat" the other player; it assumes only that each player wishes to maximize his own profit, regardless of what the other player gets.

There are plenty of reasons to trash classical game theory here, but please at least understand the classical argument, so you don't go off and attack an entirely different argument which no one, not even classical game theorists, is making.
posted by DevilsAdvocate at 10:42 AM on May 30, 2007

cortex: It's fascinating to me that only 6% of a group composed of people so thoroughly grounded in the literature would make such a move

Really? That 6% of people who are paid to figure out how to win games would make a bid that could not possibly win does not surprise you? It makes about as much sense to me as if you held a competition of 100 chefs to see who can make the best dish, and 6 of them shat on the plate and went home (even if shitting on a plate may be optimal in a very specific context). Wouldn't professional pride motivate you not to end up at the bottom of the table (as those 3 \$2 dudes will have done)?
posted by MetaMonkey at 10:42 AM on May 30, 2007 [2 favorites]

chiming in with the 100 people. this is an opportunity to cooperate with the other player in your position to take the fool asking the question for as much money as you can. once you realize this, you don't need no game theory.
posted by bruce at 10:46 AM on May 30, 2007

Traveler's Dilemma (TD) achieves those goals because the game's logic dictates that 2 is the best option, yet most people pick 100 or a number close to 100

Two is the best option? WTF? How does that even remotely make sense? If you pick two, the most you can hope for is \$4, but if you pick \$100, you might get \$100. In fact, you probably will. The only thing "illogical" is that the author of the article is apparently a moron.
posted by delmoi at 10:49 AM on May 30, 2007

So, you start with \$100, then work back and find that the best action in response for the other person is to choose \$99. But then the reaction to that is to adjust the original \$100 down to \$98, which means your maximum winnings are still only \$100. So you have NOT improved at all and have at best only hurt your opponent. Ergo, induction fails and other logic must be applied (say, expectation values).

But this is incorrect. If you choose 98 rather than 100, and you know for a fact that your opponent will always choose a higher number than you if possible, sure—but you can't possibly know that in the problem as presented.

The rational goal, in the jargon sense of the word, is not to improve your maximum payout, it is to maximize your predicted payout vs. a similarly rational opponent. Your predicted payout has to take into account the payout for either a win or a loss, and the probability that each case will occur.

If you choose anything other than \$2, so the reasoning goes, your opponent can underbid you. If that happens, you incur a loss in your predicted payout because of your choice of bids. That is why induction leads you both down to the bottom of the limit. At \$2, you are sure to take \$2 against a like-minded rational opponent, and \$4 against an opponent who is not rational. At anything other than \$2, you are risking your bid vs. either \$0 if your opponent is rational, or ??? if your opponent is not.

Modelling the rationality itself of your opponent is not within the scope of the model.

The model clearly doesn't capture actual human behavior here, but it's a failure of application, not a failure of the model logic.
posted by cortex at 10:49 AM on May 30, 2007 [1 favorite]
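The "iterate until it stabilizes" reasoning above can be checked mechanically. A small sketch (Python; `best_response` is an illustrative helper, not from the article, and the payoff rule is the one stated in the post):

```python
def payoff(mine, theirs):
    """Match pays face value; lower claim gets +$2, higher claim gets -$2."""
    if mine == theirs:
        return mine
    return mine + 2 if mine < theirs else theirs - 2

def best_response(theirs):
    """The claim in 2..100 that maximizes my payoff against a fixed opposing claim."""
    return max(range(2, 101), key=lambda mine: payoff(mine, theirs))

# Start at 100 and iterate mutual best responses until nothing changes:
# the best reply to 100 is 99, the best reply to 99 is 98, and so on down.
claim = 100
while best_response(claim) != claim:
    claim = best_response(claim)
# The iteration stabilizes only at claim == 2.
```

Each step undercuts the previous claim by one dollar to capture the \$2 bonus, which is exactly the unraveling the classical argument describes.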

That 6% of people who are paid to figure out how to win games would make a bid that could not possibly win does not surprise you?

You have made the unwarranted assumption here that receiving \$2 (and very likely \$4) is "not winning." The game allows a wide variety of possible outcomes, from \$0 to \$101, and there is no clear dividing line between "winning" and "not winning." It is your interpretation, not part of the game setup, that getting only \$2 would be "not winning."

I'm not saying I would pick 2 - after reading the thread so far, I'd probably pick 99 - but if I were given the opportunity to participate in a game/experiment which took only a few seconds of my time, and came away \$2 richer than I was before, I wouldn't consider that losing.
posted by DevilsAdvocate at 10:52 AM on May 30, 2007

The logic of the game looks much more sensible if, instead of dollars, you are playing for nuclear missiles (which the guys who developed this math were).
posted by localroger at 10:53 AM on May 30, 2007 [2 favorites]

I choose ten. I just like the number 10, it's kind of lucky for me. So take your \$8 and get off my case, ok?

(Or should I have chosen 8, so I'd win 10 when you pick 100? Math is hard)
posted by Crash at 10:53 AM on May 30, 2007

Really? That 6% of people who are paid to figure out how to win games would make a bid that could not possibly win does not surprise you? It makes about as much sense to me as if you held a competition of 100 chefs to see who can make the best dish, and 6 of them shat on the plate and went home (even if shitting on a plate may be optimal in a very specific context). Wouldn't professional pride motivate you not to end up at the bottom of the table (as those 3 \$2 dudes will have done)?

You can reduce 'professional pride' to the monomaniacal goal to Get The Most Money in a deeply omphaloskeptic exercise? That's like saying professional pride would motivate any working artist to paint as strictly realistic a portrait as they could if someone gave them the lark instructions to Paint The Queen.

It does surprise me, yes.
posted by cortex at 10:55 AM on May 30, 2007

In a classroom, ask your students how far they would go in betting on the Martingale. Most would probably go as high as \$8 or \$16.

But if you look at the math, the longer you play the Martingale (assuming you're really rich and can keep playing), the exponential payoff grows to be astronomical.

What's interesting is that human psychology does not deal with mathematical situations in a rational manner — we're particularly bad at rational assessments of probabilistic models in gambling, game theory, and other non-zero strategic behaviors.

Just as game theorists would bet \$2 in empath's game, while regular folks would bet \$100, those same regular folks might stop betting against the Martingale after about \$8-\$16 in losses, but knowledgeable (and rich) mathematicians would probably keep betting higher amounts — knowing that the long-term payoff is worth it.

The interesting question is why we're wired to ignore the odds, or why we're wired to be so bad at it. Is it smarter for the population to be dumb about risk assessment, if the payoff is large for a few individuals?
posted by Blazecock Pileon at 10:56 AM on May 30, 2007

DevilsAdvocate, I direct you to TFA:
51 members of the Game Theory Society, virtually all of whom are professional game theorists, played the original 2-to-100 version of TD. They played against each of their 50 opponents by selecting a strategy and sending it to the researchers. The strategy could be a single number to use in every game or a selection of numbers and how often to use each of them. The game had a real-money reward system: the experimenters would select one player at random to win \$20 multiplied by that player's average payoff in the game. As it turned out, the winner, who had an average payoff of \$85, earned \$1,700.
posted by MetaMonkey at 10:56 AM on May 30, 2007

I took an entire semester on game theory and I'm relying on common sense here, not choice matrices. The increments are so small here that you'd be an idiot not to choose the highest number possible. The reason Deal or No Deal works is because you are dealing with sums ranging from a penny to a million dollars.

But that said, the terms in the article are INCREDIBLY significant in changing your logic versus the way it's presented in the FPP: the latter sets it up as a game, while the article sets up a scenario where an actual item of tangible value is what is being evaluated. Only one of these sets up the dilemma as a test of morals as well as of logic skill.

In fact, with the dilemma asking for a number in the context of an actual item's value, it seems even simpler to solve. Unless you believe your partner will say the artifact was worth LESS than it actually was, your maximum threat of loss is two dollars.
posted by XQUZYPHYR at 10:56 AM on May 30, 2007 [1 favorite]

posted by Caviar at 10:57 AM on May 30, 2007 [4 favorites]

"the classical game theory argument which leads to choosing 2 does not rely on any assumption of a desire to "beat" the other player, and it does assume that each player wishes to maximize his own profit, regardless of what the other player gets."

Yup. That's it. Change my third paragraph to:

One interpretation here is that humans recognize a like minded opponent, one who is more hopeful about taking home a bunch, rather than devoted to making sure he does as well as possible against someone who is also rationally pursuing their own self interest.
posted by BigSky at 10:59 AM on May 30, 2007

Blazecock Pileon, I once watched an actual human being playing with real money run out a Martingale starting at 25 cents and finally running out the table limit of \$1000. He was betting Red. I think he lost 13 bets in a row. On the very next spin after he ran out of money, his color came in.

The Martingale is basically a self-made negative lottery; you'll probably win a little, but if you lose you will lose a lot.
posted by localroger at 11:00 AM on May 30, 2007 [2 favorites]
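The doubling arithmetic behind that anecdote can be sketched quickly (Python; a toy model, with 18/38 as the assumed probability of red on an American wheel — the function and its parameters are illustrative, not from the thread):

```python
import random

def martingale_run(bankroll, base_bet=0.25, table_limit=1000, p_win=18/38):
    """Bet on red, doubling after each loss; stop on a win or an unplayable bet."""
    start, bet = bankroll, base_bet
    while bet <= table_limit and bet <= bankroll:
        if random.random() < p_win:        # the color comes in
            return bankroll + bet - start  # net result: +base_bet, however long it took
        bankroll -= bet                    # lose the stake and double up
        bet *= 2
    return bankroll - start                # a losing streak hit the limit

# From $0.25, a 12-loss streak costs 0.25 * (2**12 - 1) == $1023.75,
# and the next bet ($1024) is over a $1000 table limit.
```

The asymmetry is the whole story: every winning run nets only the original quarter, while the rare run that hits the limit loses over a thousand dollars at once.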

Ok, here is his explanation:
When studying a payoff matrix, game theorists rely most often on the Nash equilibrium, named after John F. Nash, Jr., of Princeton University. (Russell Crowe portrayed Nash in the movie A Beautiful Mind.) A Nash equilibrium is an outcome from which no player can do better by deviating unilaterally. Consider the outcome (100, 100) in TD (the first number is Lucy's choice, and the second is Pete's). If Lucy alters her selection to 99, the outcome will be (99, 100), and she will earn \$101. Because Lucy is better off by this change, the outcome (100, 100) is not a Nash equilibrium.
But the problem is, once Lucy gets to 98, she has no reason to continue lowering her estimate. If she continued to reduce her estimate, she would get less money than if she picked \$100 and her opponent picked \$99. So (100,99) is a better outcome for her than (97,98). Only the first three values 100, 99, or 98 make any sense to choose. If this guy thinks the math works out to \$2, he's doing the math wrong.
posted by delmoi at 11:03 AM on May 30, 2007 [1 favorite]

I write \$100. You write \$99. Those are close enough to being the same price. Wouldn't the insurance guy just assume I was rounding up? And who says we bought the identical antiques in the same store?
posted by yeti at 11:03 AM on May 30, 2007

DevilsAdvocate, I direct you to TFA

The part you quoted has absolutely no relevance to the point I was making. If the person who had been randomly selected had only had an average payout of \$10, would the fact that he gained \$200 make him a "loser?" Perhaps in your eyes it would, but that's your interpretation, not part of the game, and there's no particular reason why game theorists should share your subjective dividing line between "winning" and "losing" in this game where you are virtually guaranteed some profit.
posted by DevilsAdvocate at 11:06 AM on May 30, 2007

I once watched an actual human being playing with real money run out a Martingale starting at 25 cents and finally running out the table limit of \$1000.

Of course, Martingale is part of the reason you have table limits. Without that, someone with an unrestricted bankroll could beat the house every time.
posted by cortex at 11:06 AM on May 30, 2007

"The interesting question is why we're wired to ignore the odds, or why we're wired to be so bad at it. Is it smarter for the population to be dumb about risk assessment, if the payoff is large for a few individuals?"

Because the money does not maintain the same level of value.

The money or points in these games are just an abstraction. For us it has utility. We aren't necessarily dumb about risk assessment, although we might be. Rather, the games aren't measuring value according to how it is perceived by the holder. We don't feel the same way about spending 5% of our annual income when it is 14,000 vs. 280,000.

Assuming you could play the Martingale until you reached a win, the value of the win does not exponentially grow, it stays at the value of the original wager. It is the amount wagered on each step that grows exponentially.

The Martingale is not a negative lottery; it has positive expectation.
posted by BigSky at 11:08 AM on May 30, 2007

I do agree with both DevilsAdvocate and cortex that since the game does not specify the conditions of winning, their bids are irrelevant. I had assumed maximising money to be the clear implicit goal. But in playing a game without an objective, what is the point of considering rationality whatsoever? They may as well pick their birthday.

You can reduce 'professional pride' to the monomaniacal goal to Get The Most Money in a deeply omphaloskeptic exercise? That's like saying professional pride would motivate any working artist to paint as strictly realistic a portrait as they could if someone gave them the lark instructions to Paint The Queen.

I disagree, I believe most artists would assume the goal is to paint a picture that is in line with the preferences of themselves and the people they care about.
posted by MetaMonkey at 11:09 AM on May 30, 2007

Only the first three values 100, 99, or 98 make any sense to choose. If this guy thinks the math works out to \$2, he's doing the math wrong.

What if your opponent picks 97? How do any of 100, 99, or 98 then make sense? What if he picks 50? What if he picks 2?

You don't get to pick one static opponent choice: you have to evaluate his reaction to your choice, and iterate until it stabilizes. Whether that's how you and your opponent would truly analyze the situation is up for debate, but the math is fine.
posted by cortex at 11:09 AM on May 30, 2007

I disagree, I believe most artists would assume the goal is to paint a picture that is in line with the preferences of themselves and the people they care about.

And likewise game theorists. Much as this article paints a dumb-as-rocks portrait of the field for rhetorical effect, game theorists are not dead-eyed robots.
posted by cortex at 11:12 AM on May 30, 2007

Blazecock Pileon, I once watched an actual human being playing with real money run out a Martingale starting at 25 cents and finally running out the table limit of \$1000. He was betting Red.

Of course it could happen. And the stupid thing about that "algorithm" is that the most you can win is the first bet (in this case 25¢), and the most you can lose is, erm, infinite.

Of course, Martingale is part of the reason you have table limits. Without that, someone with an unrestricted bankroll could beat the house every time.

That's not true either. If someone won, say, \$10,000,000, they would have already paid the casino \$999,999.75. And, because there is one spot that's neither red nor black, the probability is not actually even 50/50. Even with a billion-dollar bankroll you would still only get 32 spins.
posted by delmoi at 11:12 AM on May 30, 2007

But the problem is, once Lucy gets to 98, she has no reason to continue lowering her estimate. If she continued to reduce her estimate, she would get less money than if she picked \$100 and her opponent picked \$99. So (100,99) is a better outcome for her than (97,98).

The Nash equilibrium is a point where neither player can improve by choosing something else. It's true that Lucy can't improve from (99,100) by changing her number, but Pete can, by changing his 100 to 98. (99,100) only gives Pete \$97, but (99,98) gives Pete \$100. Likewise, (99,98) isn't a Nash equilibrium, because Lucy can do better against Pete's 98, by choosing 97 instead of 99. etc., etc., until you get to (2,2), and neither player can improve their position by choosing something else.

(Not defending the "2" choice, just explaining how game theory arrives at that conclusion.)
posted by DevilsAdvocate at 11:12 AM on May 30, 2007 [1 favorite]
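The chain DevilsAdvocate walks through can be verified by brute force. A sketch (Python; assumes the standard payoff rule from the post, and `is_nash` is an illustrative helper):

```python
def payoff(mine, theirs):
    """Match pays face value; lower claim gets +$2, higher claim gets -$2."""
    if mine == theirs:
        return mine
    return mine + 2 if mine < theirs else theirs - 2

def is_nash(a, b):
    """True if neither player can do better by unilaterally deviating."""
    return (all(payoff(x, b) <= payoff(a, b) for x in range(2, 101)) and
            all(payoff(x, a) <= payoff(b, a) for x in range(2, 101)))

equilibria = [(a, b)
              for a in range(2, 101)
              for b in range(2, 101)
              if is_nash(a, b)]
# Checking all 99 x 99 outcomes finds exactly one equilibrium: (2, 2).
```

Every other pair fails the test for exactly the reason given above: someone can gain by undercutting, until the floor is reached.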

THIS IS GAME THEORY!

Quote: "At the end of the war, if there are two Americans and one Russian, we win!"

General Thomas Power, commander in chief of the Strategic Air Command from 1957 to 1964 and Director of the Joint Strategic Target Planning Staff from 1960 to 1964, ranked near the top of the U.S. Armed Forces waging the Cold War.
posted by yoyo_nyc at 11:13 AM on May 30, 2007 [1 favorite]

No clearly defined goal = no basis for determining rationality.
posted by yesster at 11:17 AM on May 30, 2007

I had assumed maximising money to be the clear implicit goal.

"maximising money" is not well-defined when you have a game with many possible outcomes. Do you mean 1) maximizing the highest possible payout you could potentially receive? 2) maximizing the lowest possible payout you could potentially receive? 3) maximizing the average payout? 4) maximizing yet some other function of the various possible payouts and their probabilities?

The person who seeks to maximize the highest possible payout plays the lottery; the person who seeks to maximize the average, or the lowest payout, does not. Which one of these is "maximising money?"

In fact, it's worth noting that if your goal is to maximize the lowest possible payout--that is, to maximize the amount you are absolutely guaranteed to get, regardless of what the other player does--2 is the correct choice!
posted by DevilsAdvocate at 11:19 AM on May 30, 2007
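That maximin observation is easy to confirm with a quick sweep (Python sketch; `worst_case` is an illustrative helper, and the payoff rule is the one from the post):

```python
def payoff(mine, theirs):
    """Match pays face value; lower claim gets +$2, higher claim gets -$2."""
    if mine == theirs:
        return mine
    return mine + 2 if mine < theirs else theirs - 2

def worst_case(mine):
    """The least I can walk away with, over every possible opposing claim."""
    return min(payoff(mine, theirs) for theirs in range(2, 101))

maximin_claim = max(range(2, 101), key=worst_case)
# Claiming 2 guarantees $2; any higher claim can be undercut to $0,
# so 2 is the unique maximin choice.
```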

What if your opponent picks 97? How do any of 100, 99, or 98 then make sense? What if he picks 50? What if he picks 2?

If he picks \$97, I'd get \$95. As you know, \$95 is more than \$2. So picking \$100, \$98, or \$99 still makes far more sense than picking \$2. Same if he picks \$50: you get \$48, which is still more than \$2.

Think about it this way. Let's say you were going to play the game against 10 different opponents, who will go on to pick \$100, \$50, \$2, \$99, \$98, \$100, \$100, \$2, \$8, and \$20.

If you always pick \$100, you'll make \$565. If you pick \$2 every time, you'll make \$36. So picking a high number is better than picking a low number, because it gets you more money.

I think, as someone else said, the "game" here isn't to get more money than your "opponent"; it's to maximize your own return. You're playing against the airline, not the other passenger.
posted by delmoi at 11:20 AM on May 30, 2007
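That head-to-head tally can be rechecked in a few lines. A sketch (Python; the opponent lineup is the one from the comment above, and the payoff rule is the one from the post):

```python
def payoff(mine, theirs):
    """Match pays face value; lower claim gets +$2, higher claim gets -$2."""
    if mine == theirs:
        return mine
    return mine + 2 if mine < theirs else theirs - 2

opponents = [100, 50, 2, 99, 98, 100, 100, 2, 8, 20]
always_high = sum(payoff(100, t) for t in opponents)  # claiming 100 every time
always_low = sum(payoff(2, t) for t in opponents)     # claiming 2 every time
# always_high == 565, always_low == 36 against this particular lineup.
```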

Point taken cortex, if you are suggesting the \$2 bidders were doing it for in-joke japery. I had assumed the game to be a reward-maximisation competition, since the job of game-theorists is reward-maximisation, assuming the reward is well defined (which in this game, it is). If those 3 game-theorists are playing the meta-game of supra-game-utility maximisation, and having a laugh is of greatest utility to them, then fine. But as yesster says, if there is no goal, there is no rationality, and in fact no game, rendering all this pointless.
posted by MetaMonkey at 11:22 AM on May 30, 2007

"maximising money" is not well-defined when you have a game with many possible outcomes. Do you mean 1) maximizing the highest possible payout you could potentially receive? 2) maximizing the lowest possible payout you could potentially receive? 3) maximizing the average payout? 4) maximizing yet some other function of the various possible payouts and their probabilities?

The goal is to maximize the amount of money you actually make after the game is over. Anything less than \$98, and you're not doing that.
posted by delmoi at 11:23 AM on May 30, 2007

Think about it this way. Let's say you were going to play the game against 10 different opponents, who will go on to pick \$100, \$50, \$2, \$99, \$98, \$100, \$100, \$2, \$8, and \$20.

But that's thinking about a different problem than the one presented. If you could know in advance what your opponents would pick—or even just the average value of their individually unspecified picks—you would have more information than you have in the problem we're dealing with.
posted by cortex at 11:23 AM on May 30, 2007

VIZZINI: But it's so simple. All I have to do is divine it from what I know of you. Are you the sort of man who would put the poison into his own goblet or his enemies? Now, a clever man would put the poison into his own goblet because he would know that only a great fool would reach for what he was given. I am not a great fool so I can clearly not choose the wine in front of you ... But you must have known I was not a great fool; you would have counted on it, so I can clearly not choose the wine in front of me.

VIZZINI: [Happily] Not remotely! Because Iocaine comes from Australia. As everyone knows, Australia is entirely peopled with criminals. And criminals are used to having people not trust them, as you are not trusted by me. So, I can clearly not choose the wine in front of you.

THE MAN IN BLACK: Truly, you have a dizzying intellect.

VIZZINI: Wait 'til I get going!! ... [via]
posted by mosk at 11:29 AM on May 30, 2007 [2 favorites]

DevilsAdvocate, I understand and agree with your logic, but in the game in question there is a specific reward; a multiple of your average. If this isn't the goal, and there is no well defined goal, there isn't much of a game.
posted by MetaMonkey at 11:29 AM on May 30, 2007

And the stupid thing about that "Algorithm" is that the most you can win is the first bet (in this case 25¢) and the most you can loose is, erm, infinite.

You can choose to win anything you want. It would be foolish to only bet 25 cents over your losses. Betting more does hit the limit faster of course.
posted by smackfu at 11:30 AM on May 30, 2007

I do agree with both DevilsAdvocate and cortex that since the game does not specify the conditions of winning

Also, you completely missed my point. You seem to have the notion that there must be some value X, and if you gain \$X or more, that's "winning," and if you gain less than \$X, that's "losing."

You are misunderstanding the concept of a "game" in the sense that game theorists use the term. You may be familiar with games such as Risk, or Monopoly, or baseball, where there's one or more "winners" and one or more "losers."

The possible outcomes of games that game theorists deal with are not necessarily "winning" or "losing." This game has 102 possible outcomes for each player: get nothing; get \$1; get \$2; get \$3; ... get \$100; get \$101.

The point I was making was not "oh, you can define winning however you like and then you can do anything you like and any attempt to analyze 'rationality' goes out the window," as you seem to think. I was objecting to your attempt to reduce a game with 102 possible outcomes for each player - with a clear, well-defined preference ordering - to a simple binary "win-or-lose" scenario.
posted by DevilsAdvocate at 11:31 AM on May 30, 2007

But that's thinking about a different problem than the one presented. If you could know in advance what your opponents would pick—or even just the average value of their individually unspecified picks—you would have more information than you have in the problem we're dealing with.

What do you mean? We can't even guess? If I was going to guess, I would guess that they would pick the same number that I would in order to maximize their payoff just as I would pick a number to maximize mine. So, I would guess that they would pick 98, 99, or 100. I would naturally assume they would think the same way I would. I don't see any reason to pick anything else.

In fact, assuming the null hypothesis, which is that all choices from \$2 to \$100 are equally likely, \$100 would still be the best choice. The only way \$2 makes sense is if you do have some prior information, namely information that they'll always pick two.

With a "normal human" model, \$100 is the best answer. With a null hypothesis, you still pick \$100. Only if you use the bizarre, broken model that the author came up with is \$2 the best choice.
posted by delmoi at 11:34 AM on May 30, 2007

You seem to have the notion that there must be some value X, and if you gain \$X or more, that's "winning," and if you gain less than \$X, that's "losing."

No, my notion is that winning is reward-maximisation.
posted by MetaMonkey at 11:42 AM on May 30, 2007

Fascinating. I thought about why I instinctively go for \$100 and what I came up with was this: My motivation was to maximize the total payout by the airline guy, not to beat the other person. If we both choose \$100, the total payout is \$200. If one of us chooses \$99, the total payout is now only \$198. And so on. The total payout of this game is always (low bid * 2), so both choosing the highest allowed bid is naturally the only way to maximize total winnings.

I was surprised the article didn't mention this as a theory for why people tend to choose high in this game. It's the rational choice if you are able to imagine you and the other player as one unit, rather than pure competing individuals. Given the social nature of humanity, this seems like it would be a natural strategy for most people. If I come up with a name for this principle, can I have it? Like "Rusty's Law of Collaborative Airline Screwage"?
posted by rusty at 11:43 AM on May 30, 2007 [3 favorites]

Only if you use the bizarre, broken model that the author came up with is \$2 the best choice.

But the model is not bizarre or broken, it's just badly applied. I agree—pretty much everyone in this thread agrees—that actual humans put to this trial in a realistic setting would pick something other than \$2, but that says nothing about the soundness of the model when applied to situations better fit to it.

Someone mentioned replacing dollars with nuclear weapons. How does that change the answer? It's less inflexible than my medicine example: the other guy having 1+ nukes is not instant death for you, just a stand-off. Still and all, how does 2 and 2 compare to 0 and n? Is there a sum total of nukes you could potentially have that would make it worth it for you to also potentially end up with none against an armed opponent?
posted by cortex at 11:45 AM on May 30, 2007

100. 100, 100, 100!

So... anyone want to sign up for a game-theory trial with me?
posted by bicyclefish at 11:47 AM on May 30, 2007

I thought this was a pretty interesting piece. Of course the point of the mathematics is not

"you should pick \$2 or you're stupid,"

but

"standard models of rational behavior don't adequately reflect some aspects of the way real people make decisions -- how do we deal with this?"

I don't think the answer to that question is "junk the idea of thinking mathematically about human behavior."

And for people who think it's obvious that \$100 is the right bid and \$2 an insane bid -- do ask yourself why, if maximizing the money you take home is the goal, you would bid \$100 rather than \$99. The latter results in either more money or the same amount for you no matter what the other player does. If your answer is "the extra \$2 I might make doesn't mean anything to me," then keep in mind that there are lots of real-life decisions made by economic actors in which a 2% difference in revenue really does mean something.

(Not defending the \$2 bid, which I think people are perfectly right not to make -- just trying to emphasize that there really is some serious content here.)
posted by escabeche at 11:48 AM on May 30, 2007

At anything other than \$2, you are risking your bid vs. either \$0 if your opponent is rational, or ??? if your opponent is not.

I don't quite agree. The assumption required to decide on \$2 is not that your opponent is rational, it's that your opponent is following this particular theory, which in this case is clearly wrong. Seems to me that in order to prove this theory rational, you have to assume that it's rational.
posted by sfenders at 11:50 AM on May 30, 2007

If you can play an unlimited number of times, \$2 is the rational choice.

You keep doing that, I'll come along, say \$1 and get \$3, and you'll lose \$1.
posted by oaf at 11:56 AM on May 30, 2007

'Rational', in this case, is jargon rather than lay terminology, which is partly what the article is getting at and partly what the article kind of misses in its presentation.

No one is saying it's rational in the sense that it characterizes correctly what a cross-section of reasonably intelligent people would do, and that's why the model is such a bad fit for this proposition. But if you want to create a model that actually produces a decision, you have to rigorously define good vs. bad choices somehow, and to do so you create an artificial system of values and call the correct maximization of results in that system 'rational'.

It's a problem to suggest that because your game theoretical model says something is 'rational' that it's what reasonable, intelligent people will do. But it's just as much a problem to say that because the practical and technical definitions of 'rational' clash, the model is useless. In either direction it's a bad conflation of systems.
posted by cortex at 11:56 AM on May 30, 2007

oaf, \$2 is the minimum in the stated problem.
posted by cortex at 11:58 AM on May 30, 2007

You don't get to pick one static opponent choice: you have to evaluate his reaction to your choice, and iterate until it stabilizes.

Nope, you play once. That's the key to this game, and what sets it apart from most game theory exercises. In the prisoner's dilemma, it's crucial that the game be iterative, and that the players not know how many rounds will be played, because if they do know, the game breaks, and the only rational action ever is to defect.

Think about it- say that you know that the game will have 5 turns. The first four turns have passed (the outcomes don't matter), and you're now on the last round. You now have no hope of influencing your opponent's behavior anymore, because the game is about to end- the only thing that will affect your score is the outcome of this turn. And in PD, the best choice in any given turn is to defect, regardless of what your opponent does. So you'll defect (and your opponent will too). Actually imagine this scenario- if you knew you were in the last turn of a PD game, what would you do?

Now, roll back one turn. Both of you, being rational, know that on the next turn, you're both going to defect, so that turn is now fixed, and the only real choice you have is what you'll do on this turn, turn four. And the same thing plays out. The only way to make the game work is to make the number of turns unknown. And this holds for most iterative game theory exercises- if the number of turns is known by both players, the incentive to cooperate disappears.

In fact, I'm not sure I'm convinced that this game differs in substance from a one turn PD game. PD is defined by outcomes of the following ranking: Defect when partner cooperates > Both cooperate > Both defect > Cooperate when partner defects. If you define cooperating as choosing 100, and defecting as choosing 99, then this pattern holds (101 > 100 > 99 > 97). The backwards reasoning described in the article actually feels so much like iterative PD that I wonder if by letting the players choose any integer less than 100 (literally), the normal behavior would reassert itself, and both choosing 100 would become a rational strategy.
posted by gsteff at 11:58 AM on May 30, 2007
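gsteff's {99, 100} mapping is easy to check numerically. A sketch: `payoff` below is just the rule stated at the top of the thread (equal bids pay face value; otherwise both get the lower bid, plus \$2 for the low bidder and minus \$2 for the high bidder).

```python
def payoff(mine, theirs):
    """Payout to the player bidding `mine` against `theirs`."""
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    # low bidder earns the low bid + 2; high bidder earns it - 2
    return low + 2 if mine < theirs else low - 2

# With bids restricted to {99, 100}, the PD ranking holds:
# defect-vs-cooperator > mutual cooperation > mutual defection
#   > cooperate-vs-defector
print(payoff(99, 100), payoff(100, 100), payoff(99, 99), payoff(100, 99))
# → 101 100 99 97
```

The Lucy/Pete example from the post checks out too: `payoff(46, 100)` is 48 and `payoff(100, 46)` is 44.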

"Thinking mathematically about human behavior can be useful, and sufficiently complex models of rational behavior can reflect some aspects of the way real people make decisions, but this problem as presented is just fricking stupid."

The reason most people pick something other than \$2 isn't because most people are less-than-ideally rational. It's because this problem was poorly constructed. It completely fails to state what "winning" means. Different posters above have come up with completely plausible yet incompatible interpretations of winning.
posted by yesster at 11:59 AM on May 30, 2007

With a "normal human" model, \$100 is the best answer.

There have been days where \$2 made the difference between having a meal and not having a meal. Had I been playing the game on one of those days, I might well have chosen 2 with its guarantee of \$2, rather than anything higher, which might have given me a very good chance at \$100 or so, but also a small but non-zero risk of getting nothing. Was I not a "normal human" on those days?

No, my notion is that winning is reward-maximisation.

If the other player chooses 2 - which is unlikely, but not, as the article shows, unheard of - then choosing 2 yourself is reward-maximization, as anything else will get you nothing.

Now, if you want to say 99 (not 100!) maximizes the average reward against what humans typically play, I won't argue with that. What I am saying is that "the One And Only Goal is to maximize one's expected (in the mathematical sense of "expected") return" and "all definitions of goals are equally legitimate and any choice can be defended as rational" are not the only two possible ways to analyze this game.

One might, for example, consider the notion of utility - imagining that we could quantify how "useful" a given person would consider a given amount of money. We might agree, for example, that utility is monotonically increasing with money - everyone agrees that \$64 is better than \$63 - but not linear - people have different valuations of how much more useful \$64 is than \$63.

Given that different people have different utility functions, it's entirely rational that some people might choose 2, and some people might choose 99. At the same time, it's not saying "anything goes" - given the model of utility I described above, it's never rational to choose 100, as 99 always gets you at least as much money as 100, and sometimes more.

(If you want to take "not being a dick" into account, then 100 might be a rational choice, but then the utility model I described no longer applies.)
posted by DevilsAdvocate at 12:00 PM on May 30, 2007

Nope, you play once.

Yes, but you can evaluate the play to hell and back before you make your one play. You iterate your strategy in your head before you move, and that is where your analysis of your opponent's move keeps changing, unless (a) you're psychic, (b) you have extra information, or (c) you're braindead.
posted by cortex at 12:00 PM on May 30, 2007

It completely fails to state what "winning" means.

Again, games (in the game theoretical sense) don't have to have "winning" or "losing." If it helps, don't think of it as a "game," just think of it as a "situation." One with 102 possible outcomes, from gaining \$0 to gaining \$101.

Game theory may not be able to adequately describe what people either should or will do in this situation, but it's not because the conditions don't define "winners" and "losers."
posted by DevilsAdvocate at 12:04 PM on May 30, 2007

69

*air guitar solo*
posted by mrgrimm at 12:06 PM on May 30, 2007

100. 100, 100, 100!

So... anyone want to sign up for a game-theory trial with me?

Sure!

99. 99, 99, 99!

(And it's worth noting that I don't care whether I "beat" bicyclefish or not, I'm just trying to maximize my own payout.)
posted by DevilsAdvocate at 12:07 PM on May 30, 2007

(A very large number of people would also do the non-imaginative thing and just claim exactly what the antique actually cost them.)
posted by onlyconnect at 12:08 PM on May 30, 2007 [3 favorites]

But that's an egocentric view of the "situation." There are more than 102 possible outcomes for the situation, even if there are only 102 possible outcomes for me.
posted by yesster at 12:09 PM on May 30, 2007

the article sets up a scenario where an actual item of tangible value is what is being evaluated

That's kinda what threw me off this stupid "game," anyway. If I'm a god-fearing Christian man who doesn't lie, isn't my choice easy?

I tell the traveler exactly what I paid for it.
posted by mrgrimm at 12:09 PM on May 30, 2007 [3 favorites]

Gaming theory is concerned with winning games, not maximizing profits or scores. Think of it this way: if your choice was to pick a number between 2 and 100, and your only goal is to have a higher value than the person picking against you, the correct answer is 2 (because you can't lose).

In the scenario presented, most people don't see it as a competition, but a chance at free money. In that case, you're trying to maximize your profit/score and we can throw game theory out.
posted by Crash at 12:11 PM on May 30, 2007

But that's an egocentric view of the "situation." There are more than 102 possible outcomes for the situation, even if there are only 102 possible outcomes for me.

Fair enough; my point still stands that any failure of game theory here is not due to the fact that the game does not define "winners" or "losers."

Gaming theory is concerned with winning games, not maximizing profits or scores.

Either a)"gaming theory" is something completely different than "game theory" (and we're talking about game theory here), or else b) you don't have the slightest clue what you're talking about.
posted by DevilsAdvocate at 12:16 PM on May 30, 2007

As a rule of thumb it seems to require extraordinary circumstances before it is reasonable to expect people to make deductions requiring two or more steps (ie, A -> B -> C).

This is both why real people don't often act by backreasoning from the Nash equilibrium and why the logic behind treating a \$2 choice as optimal seems so obscure.
posted by little miss manners at 12:21 PM on May 30, 2007 [2 favorites]

I thought the "goal" was to get compensated for your broken antique that you had purchased on your trip. I would write down a number a couple bucks more than what I paid for it because of the stupid rules that the untrusting airline manager with no sense of customer service has created. If the other person chooses a higher number we both get fairly compensated. If the other person chooses \$2, I get a little irate and say "Two dollars? Where the fuck did you buy yours??? First I have to deal with this airline dick and his stupid fucking game and now this? Two Fucking Dollars!? Are you serious!?" and so on. Then I walk away with nothing but a story to tell about two fucking assholes I met on the way back from my Pacific holiday. Or am I taking it too literally?
posted by DanielDManiel at 12:23 PM on May 30, 2007 [6 favorites]

jinx, mrgrimm (plus you don't need to be a Christian to tell the truth).
posted by onlyconnect at 12:23 PM on May 30, 2007 [1 favorite]

I love these things. I remember this from grad school, or at least one functionally equivalent to it.

It was amusing watching some of my peers, who were much smarter than me, turn themselves into pretzels over it. Those of you squawking about iterative versus non-iterative are still missing the point entirely.

\$100 is always the proper choice. Always.

This is simple, so enjoy:

You cannot know, nor can you intuit, what your opponent will play. Not from one round, not from 1000 rounds.

Remember, that there is only 1 answer that is EVER superior to \$100 ... and that is \$99, but ONLY when your opponent chooses \$100.

There are 99*99 possible combinations of choices between you and your opponent, and only in 1 of those 9,801 choices, is there a superior answer to \$100.

Therefore, probability wise, it is clearly in your best interest, iterative or non-iterative, to choose \$100, as you only have a 1 in 9,801 chance of being better off by choosing \$99.

Therefore, both agents, if smart, will choose \$100 every round.
posted by Ynoxas at 12:26 PM on May 30, 2007 [1 favorite]

Man, it must be a hard time for airline managers.
posted by robocop is bleeding at 12:27 PM on May 30, 2007

If the other player chooses 2 - which is unlikely, but not, as the article shows, unheard of - then choosing 2 yourself is reward-maximization, as anything else will get you nothing.

Nonetheless, in the game as it was played, the outcome of a bid of \$2 would have given you the lowest possible reward of all players. Ergo, the professional game-theorists that bid \$2 were either (i) very poor at maximising reward, (ii) playing a meta-game where reward was irrelevant or (iii) so desperate for cash they would prefer the lowest assured reward (\$40).

So unless they were just pissing about, the dudes who bid \$2 are either fools or really strapped for cash.
posted by MetaMonkey at 12:30 PM on May 30, 2007

Greg Nog: Man, if I chose \$100 and my partner chose \$99, I'd be like, "Congratulations. You are willing to look like a dick for the princely sum of one dollar."

Actually around here it's \$5, but you know, whatever.
posted by 1f2frfbf at 12:30 PM on May 30, 2007 [2 favorites]

@MetaMonkey and others:

The game theorists are not "poorer at meta cognition" or any such thing.

When game theorists play the game, they also choose numbers much higher than the Nash equilibrium.

The article is about the fact that they don't have a FORMAL SYSTEM to explain why, and all their current formal systems predict choosing the \$2.

And no one has a formal system for explaining the inclination people have to pish-posh rational systems. But hey, meet you at the poker tables. Although it's true that a talent for gaming "people-think", coupled with a strong knowledge of the game, will edge out game theory there, good game theorists totally crucify unschooled people who play by the sort of instincts that lead the majority of people to insist 100 is the only "rational" choice in this game.

Personally, I would choose 99 if it was going to be one game, and 100 on the first game if it was going to be multiple games. If the other player played anything other than 100 on the first game, I would play 2 until he also played 2 or 100 twice in a row. If he played 100 twice in a row I would start playing 100. If he started playing 2, I would play 2 for a while, then play 50 once, then 100 twice. If he responds by playing 100 to my second 100 fine, if not back to 2. I would tune this strategy on the fly according to how the other person was playing.

This strategy totally reflects my inclination to be a non-"cheater" and a punisher, which I have to struggle to repress in poker, because it doesn't work that well in poker.
posted by lastobelus at 12:30 PM on May 30, 2007 [3 favorites]

In fact, I'm not sure I'm convinced that this game differs in substance from a one turn PD game.

In the prisoner's dilemma, it is as you say. In this game on the other hand, delmoi has it right. Unlike "defecting" in PD, picking "2" here does not always improve your score. I'm not sure whether the most "correct" choice is 99, or 98; but it's certainly not 2. If random numbers are allowed, and I don't see why they shouldn't be, I'd bet on picking a random number near 99 as being the best choice. The only way to get 2 is to assume that the other player is exactly as stupid as you are. The presented logic is about as meaningful as those trick "proofs" that 1 is equal to 2.
posted by sfenders at 12:32 PM on May 30, 2007

Therefore, both agents, if smart, will choose \$100 every round.

That was my immediate thought, as well, until people starting talking about game theory.

If the object of the game is only to beat your "opponent," then the decision is much different than maximizing profit.

Regardless of what DevilsAdvocate says, the only way this "dilemma" is even remotely interesting is if the goal is to make more money than the other player.

If that's the goal, then \$100 is the worst possible choice because you have zero chance of winning. And \$2 is indeed the right answer.

But who would want to play such a stupid game?
posted by mrgrimm at 12:33 PM on May 30, 2007 [1 favorite]

Imagine if your entire economy was based on this amount of money that you received, i.e., if you had heaps of \$100 lying around, then inflation would skyrocket, but if the GNP of you and your friend was < \$10 then you could buy much more with what you had. This is more realistic, and makes the desire to "win" of greater imperative.

Which would explain why upper class Americans would rather have Bush wreck the economy and cut their own taxes than make everyone - rich and poor - more well off.
posted by Space Coyote at 12:36 PM on May 30, 2007 [1 favorite]

Remember, that there is only 1 answer that is EVER superior to \$100 ... and that is \$99, but ONLY when your opponent chooses \$100.

No, 99 is also superior to 100 if your opponent chooses 99.

There are 99*99 possible combinations of choices between you and your opponent, and only in 1 of those 9,801 choices, is there a superior answer to \$100.

If you're considering all 9801 possibilities, you've already defined your own choice. There are 99 possibilities of the other player's choice to consider, not 9801. For 2 of those 99 choices, 99 is a superior choice to 100, and for the other 97, 99 is just as good as 100.
posted by DevilsAdvocate at 12:37 PM on May 30, 2007
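The correction above can be verified by brute force: against every possible opponent bid, 99 pays at least as much as 100, and strictly more in exactly two cases. A sketch; `payoff` is just the rule from the puzzle statement.

```python
def payoff(mine, theirs):
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    # low bidder earns the low bid + 2; high bidder earns it - 2
    return low + 2 if mine < theirs else low - 2

# Opponent bids where 99 strictly beats 100, and where it does worse
better = [o for o in range(2, 101) if payoff(99, o) > payoff(100, o)]
worse = [o for o in range(2, 101) if payoff(99, o) < payoff(100, o)]
print(better, worse)  # → [99, 100] []
```

So 99 weakly dominates 100: it wins when the opponent bids 99 or 100, and ties everywhere else.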

the point of the mathematics is not

"you should pick \$2 or you're stupid,"

but

"standard models of rational behavior don't adequately reflect some aspects of the way real people make decisions -- how do we deal with this?"

Even this misses it, I think. The fact that people aren't always rational is hardly news. The interesting thing about this problem is that it shows "standard models of rational behavior" are incapable of being as successful as real humans. Almost like we've found a trick by which human players will always be able to beat chess computers, forever. Not exactly of course- we could easily program chess computers to assume that the opponent is slightly irrational, as success in this situation seems to require... perhaps we even do. But it's still a little bit worrisome that an actor that has a complete understanding of the game and of mathematical probabilities seems to be doomed to a shitty result.

(Also, I should note that the guess I made about fixing the game by letting people choose any integer less than 100 makes no sense. The point of iteration in PD is to learn about your opponent's behavior, and there's still no opportunity for that here).
posted by gsteff at 12:38 PM on May 30, 2007

I think the supposed "rational" outcome only makes sense with the following additional rules: a) you get to see the opponent's pick and can adjust your choice accordingly, and b) you *must* pick the number that will get you the most money (with the current set of picks).

That way, if your opponent picks 100, it's a downward spiral:

"Wait a minute, if he picks 100, I gotta pick 99."
"Wait a minute, if he picks 99, I gotta pick 98."
"Wait a minute, if he picks 98, I gotta pick 97."
... and so on.
posted by sour cream at 12:42 PM on May 30, 2007

lastobelus, the reason I say that is because some professional game theorists actually bid \$2, as I've debated above with cortex and DevilsAdvocate.

I agree that game theory can be very handy when applied to things like international war, poker and corporate strategising, but outside those sort of specialised circumstances, not so much. As far as I can tell, a large number of practicing economists and game-theorists are unwilling to concede this point to a degree I consider foolish and harmful.
posted by MetaMonkey at 12:42 PM on May 30, 2007

"standard models of rational behavior"

There is no standard model of rational behaviour. There's a Nash equilibrium, for example, which seems a perfectly good concept, but hardly worthy of being called the universal principle by which rational actors should live. These models are already very limited in the kinds of things they can cover; although some of life can be captured in simple games like this, the vast majority of it can't. So I see no reason to assume that there aren't also some simple mathematical games they can't adequately describe.
posted by sfenders at 12:52 PM on May 30, 2007

What if it were the same problem, but all the dollar amounts were multiplied by a million? So you choose between \$2,000,000 and \$100,000,000, in increments of a million dollars. Does your answer change?

Mine sure does. If that were the game, I would absolutely choose \$2,000,000 so that I'd be guaranteed of taking home some money rather than risking having no money. I wouldn't want to pick a higher amount and risk my opponent screwing me by picking something lower, and I especially wouldn't want to take the risk of them picking \$2,000,000 and thus leaving me empty-handed. So I'd pick \$2,000,000.

But wait—why's my answer different now than in the \$2-\$100 game? Hm...

— Think about that. I think you'll find game theory doesn't seem quite so stupid in this revised scenario. —
posted by Khalad at 12:53 PM on May 30, 2007 [3 favorites]

Those of you squawking about iterative versus non-iterative are still missing the point entirely...You cannot know, nor can you intuit, what your opponent will play. Not from one round, not from 1000 rounds...Remember, that there is only 1 answer that is EVER superior to \$100 ... and that is \$99, but ONLY when your opponent chooses \$100. Therefore, both agents, if smart, will choose \$100 every round.

This thread is hilarious. First, choosing 99 is always superior to choosing 100 in the non-iterative version. If the game is played iteratively, then 100 can be rational, depending on your opponent's strategy, but this is why the creator of this problem specifically chose the non-iterative version... it's the one that creates these surprising results. As for your probability argument, the whole point behind game theory, the thing that distinguishes it from mere statistics, is that the possible actions of the other players don't (necessarily) have equal probabilities.

I'm really surprised by how many people keep trying to insist that the game theorist who created this problem got his math wrong, and that the rational choice is 100. It's very unlikely that we understand this better than he does- really. Explaining why game theory is unrealistic is fine, but asserting that the math is wrong is both silly and ironically proves the point of the exercise.
posted by gsteff at 12:54 PM on May 30, 2007 [3 favorites]

I said that choosing 99 is always superior to choosing 100 in the non-iterative version. It's actually never worse than choosing 100.
posted by gsteff at 12:58 PM on May 30, 2007

Regardless of what DevilsAdvocate says, the only way this "dilemma" is even remotely interesting

I have not made any claims about whether this game, or any proposed variants of it, are "interesting."

I have attempted to correct a misperception that several people seem to have, i.e., that the failure of game theory to adequately explain this situation is because game theory only seeks to "beat" the other player, rather than maximizing one's own profit. If I haven't convinced you, perhaps the wikipedia article will, which notes in its very first sentence that game theory is about maximizing one's returns, and says nothing about beating an opponent.
posted by DevilsAdvocate at 1:03 PM on May 30, 2007

First, choosing 99 is always superior to choosing 100 in the non-iterative version.

...Unless both of you choose 100, which only carries risk if one player defects. 99 has less risk and has a higher payoff for individual players, but 100 carries a higher guaranteed payoff for both players. Depends on your metric of "superior", I guess.
posted by Blazecock Pileon at 1:05 PM on May 30, 2007

This isn't as weird or hard as people are making it out.

A Nash equilbrium is a set of strategies such that each player's strategy is a best response to the other players' strategies.

(100,100) isn't self-consistent. At (100,100), both players would wish that they had chosen 99. Therefore, (100,100) is not a Nash eq since neither strategy is a best response to the other.

(100,99) isn't self-consistent either. Here, the high bidder earns 97 and wishes that he had bid 98 to earn 100. So this is not a Nash since the high bidder's strategy is not a best response to the low bidder's.

(99,98), (98,97), and all other pairs down to (3,2) are not self-consistent either. Only (2,2) is self-consistent. Here, both players are glad that they did not bid higher, because then they would have received only \$0 instead of \$2.
posted by ROU_Xenophobe at 1:08 PM on May 30, 2007
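The self-consistency argument above can be confirmed by brute force: checking every pair of bids for the Nash property (neither player can gain by unilaterally changing their bid) leaves exactly one equilibrium. A sketch; `payoff` is the rule from the puzzle statement.

```python
def payoff(mine, theirs):
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    # low bidder earns the low bid + 2; high bidder earns it - 2
    return low + 2 if mine < theirs else low - 2

bids = range(2, 101)

def is_nash(a, b):
    # neither player can do better by unilaterally switching bids
    return (all(payoff(a, b) >= payoff(x, b) for x in bids) and
            all(payoff(b, a) >= payoff(y, a) for y in bids))

nash = [(a, b) for a in bids for b in bids if is_nash(a, b)]
print(nash)  # → [(2, 2)]
```

Every other pair gives at least one player a profitable one-dollar undercut, just as described.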

Blazecock: that's only true if your choice affects your opponent's actions. And in a one-turn game, how can it? No matter what your opponent chooses, choosing 99 will always provide a payout equal or greater than what choosing 100 would.
posted by gsteff at 1:09 PM on May 30, 2007

What if it were the same problem, but all the dollar amounts were multiplied by a million?

That would make a difference only if you take into account things that are outside the scope of the problem.

asserting that the math is wrong is both silly and ironically proves the point of the exercise.

It's not that the math is wrong, just the interpretation of the result. It's not "the game's logic" that "dictates that 2 is the best option," it's the game theorist's logic, and it's wrongly applied to get that result, as he hints at in the conclusion. Perhaps there is some technical meaning of "rational" in game theory that I do not know, for which choosing 2 is rational, but he conflates it with the mundane meaning of "rational", thus confusing everyone.
posted by sfenders at 1:11 PM on May 30, 2007

The scary part is that game theory was what our government was depending on to save us all from nuclear annihilation at the hands of the Soviets.

Actually, game theory is what saved us.

The point is that when it comes to game theory and nuclear weapons, any advantage for the other person potentially means your destruction. Imagine this Traveler's Dilemma game with a different spin -- if your opponent winds up with more money than you, you die.

In that case, you will absolutely race each other to the bottom. And in the Soviets' case, "the bottom" meant spending themselves into obliteration, because by the 80s, it was clear that open-market capitalism was going to be the winner.
posted by frogan at 1:12 PM on May 30, 2007 [1 favorite]

Rational Choice theory would seem to predict that no one would ever play slots or roulette, and yet they do, in vast numbers.

No, rational choice theory only predicts that either people's utility functions for money are upward-curving in the relevant range, or that they receive side payments in addition to any winnings they might win (ie, they have fun).

I would love to have an explanation of how some game-theorists are so lacking in meta-cognition or insight into real games and real rationality. Is it simply a fetishisation for mathematical models or something deeper and more perverse?

It's part and parcel of modeling -- whittling away aspects of the real world until you're left with the hard core that, hopefully, describes the actual strategic interaction between the players. Sometimes, as here, the hard core you're left with does not accurately capture the actual strategic relationship between actors.

Here, the results are still interesting. They tell you that this sort of interaction is, at its core, not an exercise in simple one-shot maximization. Something more complex is going on here, something that probably involves social interaction, social payoff, and social sanction at some level. At its core, this isn't a numbers game, it's an ape game.
posted by ROU_Xenophobe at 1:16 PM on May 30, 2007

As far as I can tell, a large number of practicing economists and game-theorists are unwilling to concede this point to a degree I cosider foolish and harmful.

This I'd generally agree with, although I'm not sure how harmful it is compared with other unwarranted assumptions economists may make. There's an assumption that what is described by game theory as rational is truly rational (neatly skewered by this example), and there's also an assumption that people behave rationally. Together those two combine to make the doubly unwarranted assumption that people behave in ways described by game theory, but I think the latter of the two assumptions is much more harmful than the former.
posted by DevilsAdvocate at 1:18 PM on May 30, 2007

oaf, \$2 is the minimum in the stated problem.

That's what I get for skimming the question and jumping straight into the analysis.
posted by oaf at 1:18 PM on May 30, 2007

Centipede (no relation to the video game) is, I think, a simpler introduction to the same topic. Like the game here, the only Nash equilibrium is Pareto-inferior. Like here, when actual people play the game, they usually end up at outcomes at least partway along the tree.
posted by ROU_Xenophobe at 1:19 PM on May 30, 2007

When playing this simple game, people consistently reject the rational choice. In fact, by acting illogically, they end up reaping a larger reward--an outcome that demands a new kind of for­mal reasoning

They do not give us the true price of the artifact to work with. I wonder how many folks would be honest, if they knew the true price. Not very gamey.

I think the question at the top oversimplifies the game. Over many iterations, it becomes more difficult to choose, say 99, every time, as your payout will vary. If each iteration is against a new opponent, then your strategy will tend to vary widely, as you will probably, sooner or later, come across someone who stiffs you by choosing 2. Each time you get stiffed, you are probably more likely to try that gambit yourself. Although playing 2,2,2,2... is not very profitable, and therefore should not be considered logical, either, Mr. Spock.

Don't we see the same problems in the tragedy of the commons, though? Folks do not do what is best for everyone or themselves all the time. Isn't that part of the fun of being human?
posted by valentinepig at 1:26 PM on May 30, 2007

Quiet down, hippie.
posted by cortex at 1:36 PM on May 30, 2007

But the model is not bizarre or broken, it's just badly applied. I agree—pretty much everyone in this thread agrees—that actual humans put to this trial in a realistic setting would pick something other than \$2, but that says nothing about the soundness of the model when applied to situations better fit to it.

Well that's the problem. Badly applied means the same thing as "wrong" in this case. If someone came up with a story problem like
"Jane has two pennies, and Jake has 8. How many pennies do they have combined?"
and you said
"28, because Combination(8,2) = 28"
The math is right, but the answer to the problem is WRONG.

And for people who think it's obvious that \$100 is the right bid and \$2 an insane bid -- do ask yourself why, if maximizing the money you take home is the goal, you would bid \$100 rather than \$99

Yes, that's true. But once you get to 97, you start to make less money than if you'd picked 100. That's the problem here. There's no motivation to go below 98 unless you assume your opponent is following this model. But if you assume your opponent is going to pick randomly, it's best to pick 99. You have to "model" your opponent. We have three choices.

1) "Normal" human who picks \$100 or \$99
2) "broken game theory player" who picks \$2
3) No assumption: all choices are equally likely.

Those of you who are saying #2 is the rational choice are wrong, #3 is the only "information free" choice you can make.

Sure!

99. 99, 99, 99!

And I'd be happy with \$98*n than \$2*n
posted by delmoi at 1:37 PM on May 30, 2007 [1 favorite]

2) "broken game theory player" who picks \$2
2a) desperate completely broke person who picks \$2

3) No assumption: all choices are equally likely.

"All choices are equally likely" is itself an assumption. Not a wholly unreasonable one, mind you, but it shouldn't be characterized as "no assumption."

And I'd be happy with \$98*n than \$2*n

Well, if you pick 100 and I pick 99 every time, you're only getting \$97*n. But if you find that acceptable, we've reached a Nash-happiness equilibrium.
posted by DevilsAdvocate at 1:44 PM on May 30, 2007

But #3 is not information free: it assumes you have the information that your opponent will play utterly at random. Why would that be true? Did no one tell him about the game? Is he an RNG automaton?
posted by cortex at 1:46 PM on May 30, 2007

Suppose you're playing 1K iterations. I think that makes every choice just as likely as the next, because you do not know where your opponent(s) are in their strategy. You could be up against someone going 100,99,98,97,96,95... first, then a 2,2,2,2,2... second, then your RNG third, your hyperrational 99,99,99 next, the truthy guy at 43,43,43... making, for the purposes of the game, #3 as true as possible.

If it's coretex v pig, then I'm sure we're coming home with big payouts. But coretex v DA v delmoi v Xeno v RNG v RNC v Mumia...
posted by valentinepig at 1:54 PM on May 30, 2007 [1 favorite]

dmmt cortex, cortex! COOOORRRRRTEEEXXXX!!!!!!
posted by valentinepig at 1:55 PM on May 30, 2007

(I hope I'm not missing anything, I'm doing this while badly distracted at work, but I'm confident this is close.)

DevilsAdvocate: Yes, you are right, so it is 2 out of 9,801 combinations that are superior. Still, hardly enough to merit changing your strategy.

It is true that you face 99 choices, but there are 9,801 possible outcomes. But even if you only consider the 99 choices, 2 of the 99 are the same (basically), and 97 of the 99 are inferior. Again, hardly enough to merit changing your strategy.

gsteff: I think you're much more in agreement with me than not. 99 is "no better" than 100, and both are certainly superior to 2.

Basically, if you are playing this against a normal person, choose 100 or 99, whichever seems "luckier" to you. If you are playing against an economist, choose \$2. /smirk
posted by Ynoxas at 1:58 PM on May 30, 2007

But #3 is not information free: it assumes you have the information that your opponent will play utterly at random. Why would that be true?

Cortex, I'm talking about information in an information-theory sense. If you have zero information, that means all possibilities are equally likely.
posted by delmoi at 2:11 PM on May 30, 2007

Yes, you are right, so it is 2 out of 9,801 combinations that are superior.

No, if you insist on counting individual combinations, it's 2*99=198 combinations where 99 is superior to 100. Let's denote the combinations as (x,y) where x is your number, and y is the other player's.

For the combination (54,32), switching your 54 to 99 is equivalent to switching your 54 to 100.

For the combination (54,100), switching your 54 to 99 is better than switching your 54 to 100.

For the combination (31,99), switching your 31 to 99 is better than switching your 31 to 100.

For the combination (31,83), switching your 31 to 99 is equivalent to switching your 31 to 100.

For any combination (x,99) or (x,100), replacing x with 99 is superior to replacing x with 100. By your method of counting, there are 198 out of 9801 situations where 99 is superior to 100, not 2 out of 9801.

Again, hardly enough to merit changing your strategy.

Do you have an objective measure of "hardly enough?"
posted by DevilsAdvocate at 2:16 PM on May 30, 2007
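DevilsAdvocate's count can be checked mechanically. A quick Python sketch (not from the thread; the payoff function is reconstructed from the rules quoted at the top): 99 ties or beats 100 against every possible opposing bid, and strictly beats it against exactly two of them, which gives 2×99 = 198 combinations once you count your own pre-switch number.

```python
# Payoff to the player bidding a when the other player bids b, per the
# stated rules: a tie pays face value; otherwise the lower bidder gets
# the low bid + $2 and the higher bidder gets the low bid - $2.
def payoff(a, b):
    if a == b:
        return a
    return min(a, b) + (2 if a < b else -2)

bids = range(2, 101)

# 99 never does worse than 100...
assert all(payoff(99, b) >= payoff(100, b) for b in bids)

# ...and does strictly better against exactly two opposing bids.
strictly_better = [b for b in bids if payoff(99, b) > payoff(100, b)]
# → [99, 100]
```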

But if assume your opponent is going to pick randomly, it's best to pick 99.
Nope. Either \$97 or \$96. Check the outcomes:
\$100: 100 or 97 or 96 or 95 or 94 or 93 or... Sum 4853, Expected value \$49.02
\$99 : 101 or 99 or 96 or 95 or 94 or 93 or... Sum 4856, Expected value \$49.05
\$98 : 100 or 100 or 98 or 95 or 94 or 93 or... Sum 4858, Expected value \$49.07
\$97 : 99 or 99 or 99 or 97 or 94 or 93 or... Sum 4859, Expected value \$49.08
\$96 : 98 or 98 or 98 or 98 or 96 or 93 or... Sum 4859, Expected value \$49.08
\$95 : 97 or 97 or 97 or 97 or 97 or 95 or... Sum 4858, Expected value \$49.07
\$2 : 4 or 4 or 4 or ... or 2. Sum 394, Expected value \$3.98

I kind of have stuff to do tonight, and am forcing myself not to work on a generalized solution for a given range and penalty.
posted by ormondsacker at 2:39 PM on May 30, 2007 [2 favorites]
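ormondsacker's table can be reproduced in a few lines of Python (a sketch assuming, as the table does, an opponent choosing uniformly at random from 2 to 100):

```python
# Expected payoff of bidding x against an opponent who picks uniformly
# at random from 2..100. Rules as stated: a tie pays face value;
# otherwise the low bid +/- $2.
def expected_payoff(x):
    total = 0
    for y in range(2, 101):
        if y < x:
            total += y - 2      # I bid higher: low bid minus the penalty
        elif y == x:
            total += x          # tie: both get the common bid
        else:
            total += x + 2      # I bid lower: my bid plus the bonus
    return total / 99

# 96 and 97 tie for the best expected value, matching the table above.
best = max(range(2, 101), key=expected_payoff)
```

The sums match too: bidding \$100 totals 4853 over the 99 cases, while \$96 and \$97 both total 4859.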

Ha! Thanks, ormondsacker. I was thinking about the same thing.
posted by cortex at 2:42 PM on May 30, 2007

I think it's interesting that simply telling the truth isn't even a parameter to be tracked in these games. "Greetings, Professor Falken..."
posted by It's Raining Florence Henderson at 2:51 PM on May 30, 2007

I have the unfortunate impression that the dilemma could be expressed in a real world scenario, which would have almost everyone ending up at \$2.
posted by nervousfritz at 3:14 PM on May 30, 2007

I kind of have stuff to do tonight, and am forcing myself not to work on a generalized solution for a given range and penalty.

I have stuff to do too, so naturally...

For the range 1 to n*, and a penalty p, if the other player randomly picks an integer between 1 and n, inclusive, with equal probability, then the expected value from choosing x is [-x² + (2n-4p+1)x + (2np+2p)]/2n. The optimal choice is n-2p+½, rounded to the nearest integer. (If 2p is an integer, n-2p and n-2p+1 are equally good.) If 2p is an integer, the expected value from your optimal play is (n²-2np+4p²+n)/2n.

*If the range doesn't start at 1, just shift the whole thing so it does. E.g., to analyze 2-100, just analyze 1-99 then add 1 back in to both the optimal strategy and to the expected values when you're done.
posted by DevilsAdvocate at 3:40 PM on May 30, 2007 [1 favorite]
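The closed form above can be verified against a brute-force sum (a sketch, not DevilsAdvocate's own derivation; `Fraction` just keeps the arithmetic exact):

```python
from fractions import Fraction

def ev_brute(x, n, p):
    """Expected payoff of bidding x on the range 1..n with penalty p,
    against an opponent choosing uniformly at random from 1..n."""
    total = sum(y - p for y in range(1, x)) + x + (n - x) * (x + p)
    return Fraction(total, n)

def ev_formula(x, n, p):
    # [-x^2 + (2n-4p+1)x + (2np+2p)] / 2n, as given above
    return Fraction(-x * x + (2 * n - 4 * p + 1) * x + 2 * n * p + 2 * p,
                    2 * n)

n, p = 99, 2  # the 2..100 game, shifted down by 1 as the footnote suggests
assert all(ev_brute(x, n, p) == ev_formula(x, n, p) for x in range(1, n + 1))

# optimal choice n - 2p + 1/2 = 95.5, so 95 and 96 tie (96 and 97 unshifted)
assert ev_brute(95, n, p) == ev_brute(96, n, p)
```

The optimal-play value checks out as well: ev_brute(95, 99, 2) equals (n²-2np+4p²+n)/2n = 4760/99.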

Is he an RNG automaton?

So, just in case anyone else was wondering how an RNG would do compared to fixed-number-picking automatons, in a random population of each, where the random ones choose randomly between n and 100.... I ran a tournament, eliminating the worst performer after each 1970000 trials. Each trial against a random opponent from the remaining pool, that is. This seems to me at least as sensible as iteratively eliminating dominated strategies.

Always picking 95 seems to be the winner, beating out the best of this class of RNG, which picks at random from 94-100.

So to rant just a bit more about that game theory stuff, imagining how it applies to the real world, the only way you can rationally choose a strategy based on always selecting a number lower than what others are likely to choose is to know something about the likely choices of the population of people you're playing with. The only way to get that information is by playing the game with them for a while. In doing that, you're revealing your own style of play. And if you always pick 2, they're going to go play with someone else. This is roughly how we evolved to be altruistic, I suppose.
posted by sfenders at 3:43 PM on May 30, 2007
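The full elimination tournament is stochastic, but the head-to-head expectation behind its result can be checked deterministically (a sketch, not sfenders' actual code): a fixed bid of 95 beats an opponent drawing uniformly from 94..100 on average.

```python
from fractions import Fraction

# Payoff to the player bidding a when the other bids b,
# per the rules quoted at the top of the thread.
def payoff(a, b):
    if a == b:
        return a
    return min(a, b) + (2 if a < b else -2)

opp = range(94, 101)  # the best RNG from the tournament: uniform on 94..100

# expected payoff of always bidding 95 against that RNG
fixed95 = Fraction(sum(payoff(95, b) for b in opp), len(opp))
# the RNG's expected payoff against the fixed-95 player
rng = Fraction(sum(payoff(b, 95) for b in opp), len(opp))

assert fixed95 > rng  # 96 exactly, vs 656/7 ≈ 93.7
```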

Incidentally, this thread has had me thinking about the RoShamBo Challenge [also] from a few years back: folks pitting algorithmic Rock-Scissors-Paper bots against one another in round-robin tournament play.

One of the interesting results of that was that non-random programs couldn't crack random programs, but could compete significantly with one another for much better success rates than random. So RandBot would win 50% of throws over the long term, against either a RandBot or a HeuristicBot. In that sense, RandBot was an optimal conservative player: he could not be forced to lose, if "losing" is defined as doing significantly worse than 50%.

But given HeuristicBotA vs HeursticBotB, one or the other could come out significantly ahead. After a couple of years of RoShamBo, some very strong (and, generally, simple) heuristics came out of the competition. So if "losing" is defined as doing significantly worse than your best-performing competitors, the RNG strategy goes from being optimal to being shit.
posted by cortex at 4:01 PM on May 30, 2007 [1 favorite]

Of course you'd pick \$100. If the objective is to maximize the money for yourself, that's the number most likely to do that. If your only objective is to fuck over your opponent, then you pick \$2, and make \$4. What an asinine idea of logic. Making \$4 is better than making more than \$4? Their definitions of logic need to be deeply rethought.

If I pick \$100 then the worst I will get is whatever my opponent picks - 2. I don't get why they think any number other than \$100 makes sense. Why is making \$4 better than making greater than \$4?
posted by MythMaker at 4:59 PM on May 30, 2007

The dude's obviously got at least \$200 on him; probably more. I say we knock him the fuck out and make a dash with the cash.
posted by Eideteker at 5:16 PM on May 30, 2007 [1 favorite]

Making \$4 is better than making more than \$4? Their definitions of logic need to be deeply rethought.

Making \$2 or \$4 for sure is better, in the strict sense of assuring some gain in a trial with an unsympathetic opponent, than possibly getting \$0. That's the measurement. That "unsympathetic opponent" doesn't seem to fit the story problem presented isn't a failure of the model on its merits, it's a failure to fit a well-matched model to the situation.

That's the author's complaint, in a sense: these models don't fit the situation! But it's being read as an indictment of the model, rather than the fit, which is silly.
posted by cortex at 5:20 PM on May 30, 2007

Eideteker wins the thread with the metasolution. So appropriate for Metafilter.
posted by localroger at 5:30 PM on May 30, 2007

I knew John von Neumann. John von Neumann was a friend of mine, and Kaushik Basu, you're no John von Neumann.
posted by imperium at 5:39 PM on May 30, 2007

The current state of the art in game theory works perfectly under some assumptions:

(Note: not a game theory Ph.D., I'm sure this is not a canonical list, and much of my game theory background is in the formal analysis of two-player zero-sum games, not economic game theory.)

1) All actors are perfectly rational, and will assume that all other actors are perfectly rational, and that all actors will assume that all other actors will assume that all actors are perfectly rational, and so on, ad infinitum.

(This is a very poor model for humans, who generally aren't rational, and will usually only make the assumptions about assumptions back and forth to 1 or 2 levels. The model becomes better as humans become more expert, but making people experts tends to run afoul of the problems below. Exceptions tend to be in actual games, like chess, go, and such.)

2) The game is independent, such that a cooperative strategy is irrelevant. (By "cooperative strategy" I mean one which is sub-optimal for an individual game, but may retrain the assumptions of the other actors to provide a closer-to-optimal solution over many games).

(This is also a poor model for humans, most of the time, because repeated interaction is very very hard to eliminate, especially if you also want to make people 'experts'.)

3) The rewards for the game are the only important rewards.

(Again, bad model for humans, but it gets better as the rewards for winning become more extreme, thereby dominating external rewards. E.g., not looking like an asshole over \$2 is a lot different to a typical human than not looking like an asshole over \$2,000,000.)

Game theory does worse than humans at this particular problem because humans have adapted to situations where 2 and 3 do not apply, and because very few humans know how to apply the logic in 1 to more than a couple of levels. (And thus, even those humans who know how tend not to, because slightly smarter ones will realize that their assumptions about other actors' assumptions are wrong after a few iterations.)

A similar, real-world case where this kind of logic does apply (though not perfectly) is in competing commodities businesses: if you have a few businesses who can agree to all keep their prices equally high, then all businesses make a lot of money, but there is a strong incentive for each individual business to drop its prices a bit and screw the others: thus, we have some stable cartels with only a few hyperrational operators who can trust each other, and many industries with lots of businesses driving each other to the very edge of profitability. (For example: OPEC vs gas stations.)
posted by reventlov at 6:33 PM on May 30, 2007 [1 favorite]

2) The game is independent, such that a cooperative strategy is irrelevant.

The thing that makes the difference with those "competing commodities businesses" where a minimum-profit equilibrium is approached is not that they're more perfectly rational, it's that there, the choice is *not* independent, it's iterated. (Probably also required in practice would be that new competition is free to enter the market, among other things.) In order to set prices, they can try selling at whatever price they guess might work, based on their knowledge of the other players, then incrementally adjust it to maximize profit.

Similarly, the logic that leads to that kind of equilibrium does not apply to the traveller's dilemma unless you have an iterated game with at least one person in it who's not willing (due to fear of missing out, or whatever) to cooperate to some extent even when it is in their rational self-interest to do so. Yes, I have no easy way to quantify what number the traveler *should* choose; lacking such cleverness does not lead me to expect that the rational thing to do is leap to the only game-theoretically interesting number in sight. That makes about as much sense as picking the largest prime number in the set of choices.

If there's any actual logic or reason to show that the rational choice in such games is always a Nash equilibrium where there is one, (which is not the same thing as simply calculating what that equilibrium is,) then surely someone would've mentioned it by now. It's the usual meaning of equilibrium that it's a place where a gradually-adjusting process can settle down and stay for a while, it's not some desirable zenith of happiness. If you have just one chance to pick a number, there is no iteration of the game, so there need be no iteration of weakly-dominant strategies displacing each other in your rational mind while picking one. Rather, any rational person would come to the conclusion that, unlike the prisoner's dilemma, there is no obvious answer to this problem in game theory, and so use some other means. If the SciAm representation of game theory is accurate, then game theory has some serious problems.

Game theory: All actors are perfectly rational.

Inigo Montoya: You keep using that word. I do not think it means what you think it means.
posted by sfenders at 7:55 PM on May 30, 2007

Rational: doing the optimal thing for the situation you are in, based on the relative values you place on the possible outcomes of your actions.

(Mathematical) game theory is based on axioms, just like the rest of math. If you attempt to apply it to a situation where the axioms are false, you will get bad results. Saying game theory is "broken" or that there is something "seriously wrong" with it is analogous to saying non-Euclidean geometry is broken because it doesn't correctly model Newtonian space.

Game theory is based on axioms that all actors are perfectly rational, perfectly smart, have certain valuations of different outcomes that do not necessarily work in the real world, and know that every other actor is the same, including this knowledge. It's this last part that causes the infinite iteration that everyone seems to be having problems with.

So, for instance, based on this thread so far, a perfectly rational, shameless, absolutely greedy actor would be most rational to pick \$99: this gets him (slightly) more money than any other strategy, given that he'll be getting \$101 to everyone else's \$100 most of the time. Even knowing this, most of the people here probably wouldn't change their bid, or would change it only slightly: the \$1-2 difference is considered insignificant: they are not absolutely greedy.

If, instead, you put in two absolutely greedy super-genius bastards who knew each other for what they were, and knew that they'd never play this game, nor any similar game with each other again, you'd get the game theory prediction of \$2. This isn't the optimal strategy in the real world, but it is the greediest strategy when you're playing against an absolutely greedy super-genius bastard who thinks you are exactly the same as him.

Now, a group of actors will get the most out of cooperation, which is probably why people don't tend to be absolutely greedy: people in cooperating groups are more likely to survive than those who shortsightedly try to maximize their own benefit to the exclusion of all else. But this is not the most rational behavior, in all cases, for each individual.

We tend to cooperate with each other even when it isn't in our absolute self-interest because it isn't usually worth evaluating every situation to figure out whether it is or not. (Think of, for instance, tipping the waitress at a restaurant you'll never visit again.) This isn't a surprise to much of anyone--including, I suspect, most of the economists who have been trying to use game theory to model markets--but I've only recently started seeing studies pop up trying to quantify the irrationality of humans.
posted by reventlov at 10:20 PM on May 30, 2007

Making \$2 or \$4 for sure is better, in the strict sense of assuring some gain in a trial with an unsympathetic opponent, than possibly getting \$0.

Yeah, I can see that's true, but no sane person would pick \$2.

Everyone would think you were an asshole, and you'd be losing money, unless your lost object really did cost \$2.

Humans are cooperative animals. It's really quite reasonable, in this example, to assume that the other person is going to pick a number greater than 2.
posted by MythMaker at 11:47 PM on May 30, 2007

Bottom line, picking \$2 means the maximum you can win is \$4, and the minimum is \$2. Picking \$100 means the maximum you can win is \$100 and the minimum is \$0. There's not even any game theory logic for a race to the bottom. The priority is never to "beat your opponent" in a game like this, because, as has been pointed out, the other player is on your side. Idiotic article.
posted by imperium at 1:58 AM on May 31, 2007

I want to see a game show version of this, except where the potential grab is something like \$10,000. So when one contestant screws it all away on a \$4 sure thing, insane, ratings-grabbing violence can ensue.
posted by dreamsign at 3:41 AM on May 31, 2007

If, instead, you put in two absolutely greedy super-genius bastards who knew each other for what they were, and knew that they'd never play this game, nor any similar game with each other again, you'd get the game theory prediction of \$2.

Again, we're back to the beginning. You only get this result, as you say, if the genius you're playing against thinks exactly as you do; it's begging the question. That way of thinking is not the best one.

There is no rational reason to suppose that your opponent in the game is incapable of carrying the reasoning a step further and saying to himself "so, we see that there is a line of thought that leads to the conclusion that I should bid \$2. Is it likely that my perfectly intelligent friend will follow that and actually do so?" No, it isn't. He too will realize that if I am rational, and think exactly the way he does, that choosing \$100 will get him more money. If you accept that we think exactly the same, and if no random numbers are allowed in this hypothetical universe, then sure we must each pick the same number. So if I pick \$100, you must pick \$100. Therefore, \$100 is a better choice than \$2. Since we're perfectly rational, we ought to be capable of figuring this out and therefore picking the highest one.

This is not really some kind of "meta-rationality", choosing an "irrational" action because it is likely to do better. It's no more than once again making the same leap that led to the initial logic of \$2, which, you'll recall, also depends on your opponent thinking exactly as you do. It's a recognition that \$2 is not the hyper-intelligent choice of a perfectly rational actor, it is the choice of a particular machine running one rather limited program which anyone can see is probably not the best one in any meaningful way. It's not the best for the real world, and it's not the best when it can count on everyone else in its world being perfectly rational. It's the best only in some nightmarish in-between world where everyone is capable of exactly one level of abstract thinking about concepts of self and other.

Okay, so there is still a bit of a paradox involved. Perfectly rational actors can't actually exist in reality, so that's okay with me. In whatever universe they do exist in, they're going to recognize the paradox and resolve it in the way that benefits them, not the way that makes sense to the game theorist who imagined them.
posted by sfenders at 4:15 AM on May 31, 2007

Bottom line, picking \$2 means the maximum you can win is \$4, and the minimum is \$2. Picking \$100 means the maximum you can win is \$100 and the minimum is \$0.

No. Picking \$100 means the maximum you can win is \$0, and the minimum you can win is \$0, because you can predict that the other player has already chosen \$2.

There's not even any game theory logic for a race to the bottom.

I have not put together a 98 x 98 payoff matrix, but it should be the case that (2,2) is the unique Nash eq. At the very least, if you give me any other set of bids, I can show you that they're not a Nash. Whether something is a Nash equilibrium or not has a wee bit to do with game theory logic.

The result here depends on the features of the Nash equilibrium concept. An outcome is a Nash only if no player could do better by changing his strategy. For (100,100), either player could do better by switching to 99. For (37,63), the player bidding 63 could do better by switching to 36. The only instance where at least one player can't do better by switching to \$1 less than the other player is when doing so is impossible, so (2,2) is the Nash.

Not being a Nash doesn't mean "IT WILL NEVER HAPPEN." It only means "IT IS NOT A SELF-SUSTAINING STABLE EQUILIBRIUM THAT WILL EMERGE FROM THE INTERACTION OF RATIONAL EGOISTS." Any outcome other than (2,2) has to be sustained by something else, that is all.

The priority is never to "beat your opponent" in a game like this

As lots of people have said, it's not the case here either. You're just maximizing your own payoff in an environment where that payoff is a function of your bid and the other player's bid.

There's really nothing very mysterious about this. The Nash in this game is strongly Pareto-inferior, and lots of games with Pareto-inferior equilibria don't exhibit those equilibria in real life or experiments. This isn't hard to figure out. When something sucks, people look for ways to decrease the suckage. And in experiments, there's nothing on the line to drive out players who behave irrationally in a one-shot game. If a zero-payoff meant death, you'd probably see either more \$2 bids, or more explicit mechanisms designed to shove people towards better outcomes (ie, anyone who bids \$2 is killed by an angry mob).
posted by ROU_Xenophobe at 7:44 AM on May 31, 2007
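Rather than putting together the 98 x 98 payoff matrix by hand, the claim that (2,2) is the unique pure-strategy Nash equilibrium can be brute-forced in a few lines (a sketch, with the payoff function reconstructed from the rules quoted at the top):

```python
# Payoff to the player bidding a when the other bids b: ties pay face
# value; otherwise the low bid + $2 for the low bidder, - $2 for the high.
def payoff(a, b):
    if a == b:
        return a
    return min(a, b) + (2 if a < b else -2)

bids = range(2, 101)

# best payoff achievable against each possible opposing bid
best_reply_value = {b: max(payoff(x, b) for x in bids) for b in bids}

# (a, b) is a Nash equilibrium iff each bid is a best reply to the other
nash = [(a, b) for a in bids for b in bids
        if payoff(a, b) == best_reply_value[b]
        and payoff(b, a) == best_reply_value[a]]
# → [(2, 2)]
```

Against any bid above 2, underbidding by exactly \$1 is the unique best reply, which is why nothing but (2,2) survives.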

"IT IS NOT A SELF-SUSTAINING STABLE EQUILIBRIUM THAT WILL EMERGE FROM THE INTERACTION OF RATIONAL EGOISTS."

Yes. Since in the single independent game there is no interaction between them but for the outcome of the game which happens only after they've made a choice, there is no opportunity for an unstable equilibrium to become destabilized, and the Nash equilibrium does not determine the outcome.
posted by sfenders at 8:46 AM on May 31, 2007

The priority is never to "beat your opponent" in a game like this, because, as has been pointed out, the other player is on your side. Idiotic article.

The article does not assume that the priority is to "beat your opponent." The whole point of the article, as has been pointed out at least half a dozen times in this very thread, is that classical game theory indicates that players should choose 2, even given that they are trying to maximize their own return and don't care about beating the other player. Idiotic comment.
posted by DevilsAdvocate at 9:02 AM on May 31, 2007

It's kind of fascinating that trying to squeeze one more dollar out of your return causes you to minimize it instead.
posted by empath at 9:33 AM on May 31, 2007

This thread brings back memories of having discussions about the Monty Hall problem, which is another of those problems for which people will actually get to I-want-to-fight-you levels of anger if you persist with the right reasoning.

The reasoning a "rational" player would use to arrive at \$2 is something like this:
(1) I want to maximize my outcome
(2) My opponent wants to maximize his outcome

Suppose my opponent thinks I am going to say \$100; then, my opponent's rational choice is \$99, so as to get the \$101 payoff. However, my opponent knows that I know that if he thinks I am going to bid \$100 then he will bid \$99, and my opponent also knows that if I know he is going to bid \$99 that my optimal choice is to bid \$98, so as to get a payoff of \$100. But, in turn, my opponent knows that I know that if I have followed this train of logic to the point that I am bidding \$98 he can do best by bidding \$97, but he knows that I know that he knows that and thus I will anticipate his underbid and respond by bidding \$96, which is an improvement of my outcome in the event he bids \$97...nearly ad infinitum, until we arrive at the \$2 bids.

Similarly, suppose my opponent thinks I am going to say \$99; then, my opponent's rational choice is \$98, so as to get the \$100 payoff, but then my opponent knows that I know that he will do that so I will actually bid \$97 so as to get the \$99 payoff...nearly ad infinitum, until we arrive at \$2 bids.

Similarly, suppose my opponent thinks I am going to say \$98; then, my opponent's rational choice is \$97, so as to get the \$99 payoff, but then my opponent knows that I know...

...all the way to the \$2 bids, which provide a floor beyond which we can't continue this reasoning. This is the kind of logic "rational" players are using in this game, spelled out a bit more explicitly.

But more to the point: without some kind of hyper-rationality a la Hofstadter upthread, there's no reason for a "rational" player to expect both parties to bid \$100 when the other party is "rational", as each rational player would know that the other would bid \$99 if they knew the other would be bidding \$100. Essentially: if I think you are going to bid X I will maximize by bidding max(2,X-1), and taking the transitive closure of these implications means the only position a rational player can be assumed to take is \$2.

It's quite obvious that real people do not think in this fashion except under special circumstances -- while playing high level chess and poker, for example -- and even when people do think in this fashion they typically have trouble following chains of reasoning longer than a few (as in, one to two) steps.

There's nothing terribly revolutionary here other than an example that people not only do not typically think by reasoning from a Nash equilibrium but also are not typically able to grok the thinking strategy without some serious prior exposure; I think the latter is a quite telling argument contra the appropriateness of modeling real-world "rational" agents in this fashion except when it has been explicitly shown that this reasoning strategy is a good fit to their past behaviors.

In any case, this scenario is not as contrived as it sounds: it is not all that far removed from any situation where N+M parties submit sealed bids -- without communication between parties -- for something like a large construction project, with the purchaser choosing to divide the project amongst the lowest N bidders.
posted by little miss manners at 10:21 AM on May 31, 2007
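The transitive closure little miss manners describes is exactly iterated elimination of weakly dominated strategies, and it can be sketched directly (an illustration, not anyone's actual code; the payoff function is reconstructed from the rules at the top): at each step the highest surviving bid is weakly dominated by the bid just below it, all the way down to \$2.

```python
# Payoff to the player bidding a when the other bids b.
def payoff(a, b):
    if a == b:
        return a
    return min(a, b) + (2 if a < b else -2)

strategies = list(range(2, 101))
while len(strategies) > 1:
    hi, second = strategies[-1], strategies[-2]
    # `second` does at least as well as `hi` against every surviving bid
    # (and strictly better against `second` and `hi` themselves)...
    assert all(payoff(second, b) >= payoff(hi, b) for b in strategies)
    # ...so the highest surviving bid can be eliminated.
    strategies.pop()
# → [2]
```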

yeah, I think multiplying by millions makes it more intuitive. the only way to absolutely guarantee that you get anything at all is to choose \$2. Otherwise there is a small but actual chance that you will get nothing. If it's \$2 vs. nothing, most people wouldn't really care, so choosing \$100 seems the obvious choice. But if it's \$2million vs nothing, it's less confusing why someone would play it safe.
posted by mdn at 10:24 AM on May 31, 2007

The way to make the game make sense is to remove the extraneous information about it that distracts people. Here's the game:

1. You and another person will each pick a number between 2 and 100.

2. You will not be told what number the other person picks.

3. Whoever picks the lower number wins.

4. If you both pick the same number, you both win.

5. The object of the game is to win.

In that game, everyone with half a brain will always pick 2.

And that's really the game that the hypothetical thinks is being played.
posted by The World Famous at 10:54 AM on May 31, 2007

I'd say how ever much the vase cost. Anything more would be stealing.

Then I'd call this guy's boss and tell him what a douche bag he has working for him for thinking up this insulting game.
posted by Bonzai at 11:06 AM on May 31, 2007

Ugh. One more time. No tricky Hofstadter-style reasoning required:

Suppose my opponent thinks I am going to say \$100; then, my opponent's rational choice is \$99, so as to get the \$101 payoff. However, my opponent knows that I know that if he thinks I am going to bid \$100 then he will bid \$99, and my opponent also knows that if I know he is going to bid \$99 that my optimal choice is to bid \$98, so as to get a payoff of \$100. But, in turn, my opponent knows that I know that if I have followed this train of logic to the point that I am bidding \$98 he can do best by bidding \$97, ... so if he is bidding \$97, the best he can expect is \$99, less than the \$101 maximum; and the best I could expect is also \$99, less than it could be if we both chose \$100. So if he follows *this* logic, he must bid \$100 and expect me to do the same. But then I should obviously bid \$99, so he must go with \$98. So I must pick \$97. Knowing that he is rational, I must therefore pick \$100. And so on.

This script uses only very slightly more sophisticated logic, and also has the advantage of being closer to what happens in reality, like so:

it is not all that far removed from any situation where N+M parties submit sealed bids -- without communication between parties -- for something like a large construction project

It is somewhat similar, yes. For N=M=1, what happens when their bids are exactly equal? Assume they expect 50% of the number when that happens. The perfectly rational choice is then to bid the maximum. In a realistic version of course they don't know the maximum beyond which the project will be cancelled, there is a very small chance of them all choosing the same estimate of it, and they also have various estimates of each others' capabilities and overhead costs and so on. So they rationally choose the maximum they think they can get away with; not the minimum possible.

Look, reality agrees with me on this; you just don't see construction companies intentionally doing things for cost plus \$0.01, and when you see them get relatively close to that, it's because of their acquired knowledge of the other bidders' willingness to bid low. In other words the interaction between them pushed them towards the equilibrium, while their rational self-interest keeps them from actually reaching it.
posted by sfenders at 11:47 AM on May 31, 2007

sfenders: I do not dispute what real people and real organizations do in real life -- they certainly don't behave the way a "rational" player is supposed to in this game. I brought up the bidders to point out that this scenario isn't as contrived as some are making it out to be.

Despite your protests you do need Hofstadter-style reasoning to obtain the result you are claiming; to see this, consider this question: "if I am obligated to choose the highest-paying option available, what would I need to know for \$100 to be that best possible choice?"

It can't be that my opponent is also going to bid \$100 -- because if I knew that my opponent was going to bid \$100, then my best choice would be to bid \$99 and get the \$101 payout.

It can't be that my opponent is also going to bid \$99 -- because then my optimal bid is to bid \$98 and collect \$100 (bidding \$100 is going to net me \$98 in this context).

...

It can't be that my opponent is also going to bid \$2, because then my optimal bid is to bid \$2.

So there's actually no play my opponent might make for which a \$100 bid is optimal on my part, and thus a \$100 bid is never "rational" in this game. The only way the \$100 is "rational" -- in the sense of the best possible choice -- is if the kind of rationality is akin to Hofstadter's.
posted by little miss manners at 12:27 PM on May 31, 2007
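
little miss manners's case-by-case argument can be verified by brute force; a short Python sketch, assuming the standard \$2 bonus/penalty payoffs from the article:

```python
def payoff(mine, theirs):
    """My payoff under the article's rules: $2 bonus/penalty."""
    if mine == theirs:
        return mine
    return mine + 2 if mine < theirs else theirs - 2

def best_responses(theirs):
    """All of my bids that maximize my payoff against a given opposing bid."""
    top = max(payoff(m, theirs) for m in range(2, 101))
    return [m for m in range(2, 101) if payoff(m, theirs) == top]

# No opposing bid makes $100 a best possible choice...
assert all(100 not in best_responses(t) for t in range(2, 101))
# ...while $2 is the unique best reply to itself.
assert best_responses(2) == [2]
print("a $100 bid is never a best response")
```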

"what would I need to know for \$100 to be that best possible choice? -- It can't be that my opponent is also going to bid \$100"

Okay, I get it. That does make the choice of \$100 a different sort of reasoning, which I should have seen, and the logic I suggested doesn't quite get to it.

But I'm still not convinced that we can't rule out \$2 as the best choice with the reasoning I was using. What's wrong with it? Just that it doesn't produce any result? Doesn't that simply mean that there is no best choice in that system of logic?
posted by sfenders at 1:09 PM on May 31, 2007

Look, reality agrees with me on this; you just don't see construction companies intentionally doing things for cost plus \$0.01

But you do, when they see some other valuation to winning the contract. They'll go down below cost, if the situation is right: winning the bid may mean winning the contract no matter what (they're trying to push into, or push someone out of, a market; they've got outstanding obligations; they have porkbarrel understanding with an interested party; etc), rather than realizing a healthy above-the-board profit on the immediate contract.

It comes down to picking examples. Constraining myself to the original premise: to suggest that pocketing ~\$100 must be worth more to a rational human being than winning the nonce contest is to suggest that there are no rational assholes or tricksters, which is blatantly false.

So how does fitness of the model change if our two passengers shared a miserable trip in adjacent seats and loathe one another? Or if one secretly loathes the other? The example doesn't give us that information, either, and it's only our happy assumption that there's no spite between these two aggrieved passengers, that they have mutual and mutually understood feelings of unquestioning goodwill toward one another, that makes the altruistic "of course we'd both just want to make the best of things" argument work.

It's ridiculous and lazy to try to fit the basic game theory model to the example in the article and then complain that it doesn't work; and it's wrong-headed to read the article as an effective indictment of the model as it is stated and understood by reasonable theorists. That some bull-headed game theorists and economists et al might blindly, stubbornly apply a badly fitted model to a situation is hardly unique to that one field.
posted by cortex at 1:21 PM on May 31, 2007

The World Famous: When defined that way, yes the answer of 2 makes sense.

But, that isn't the way the game was defined. The game as defined does not have a binary outcome (win/lose), but rather a scalar outcome (win with X points), and it's stupid to think that won't change things. The benefit of winning is tied directly to the magnitude of the number picked, so I don't think that's exactly "extraneous".

I dig game theory, but I think the people who came up with that answer messed up in that they were ignoring the value of the win as opposed to the fact of the win. Winning with a value of 2 is pointless.

If we designed a genetic algorithm to play this game and let things evolve for a few thousand generations I'll guarantee you that the surviving algorithm wouldn't choose 2 as its answer all the time.
posted by sotonohito at 1:25 PM on May 31, 2007

Er, my genetic algorithm example assumes (as most genetic algorithms do) a non-zero reproduction cost.
posted by sotonohito at 1:26 PM on May 31, 2007

"But I'm still not convinced that we can't rule out \$2 as the best choice with the reasoning I was using."

Okay, never mind, yes I am. For the constraints of that particular system of logic, I mean, which doesn't look much like what we normally think of as "rational".
posted by sfenders at 1:37 PM on May 31, 2007

5. The object of the game is to win.

Except, as advertised, that is not the object of the game. The object of the game is for Lucy and Pete to maximize their individual compensations for the lost items. 100 is the best answer for both.

If the object of the game were to simply end the game with more money than your opponent, then 2 is indeed the best answer.

So, it's a stupid question all around.
posted by frogan at 2:28 PM on May 31, 2007

100 is the best answer for both.

No, it is not. 99 is the best response to 100. And 98 is the best response to 99. And 97 is the best response to 98. And, and, and, 2 is the best response to 2. So 2 is where it equilibrates.

Again, this isn't anything mysterious or weird. It's just a quirk of the definition of the Nash equilibrium, which is based on strategies being mutual best responses. Of course Nash equilibria break down sometimes, or sometimes aren't the outcome we observe in the real world. The concept of the Nash equilibrium is very simple, and the world and its human inhabitants are very complex. The astonishing thing isn't that you can contrive a game so that the Nash equilibrium is silly, generating the logical counterpart to an optical illusion. The astonishing thing is that a concept as simple as the Nash equilibrium has been as useful as it has in analyzing strategic behavior.
posted by ROU_Xenophobe at 3:10 PM on May 31, 2007
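
The best-response chain described above (99 answers 100, 98 answers 99, and so on down to 2) can be traced with a few lines of Python, again assuming the article's \$2 bonus/penalty payoffs:

```python
def payoff(mine, theirs):
    """My payoff under the article's rules: $2 bonus/penalty."""
    if mine == theirs:
        return mine
    return mine + 2 if mine < theirs else theirs - 2

def best_response(theirs):
    # The best response happens to be unique for every opposing bid here.
    return max(range(2, 101), key=lambda m: payoff(m, theirs))

# Iterate best responses starting from 100 until a fixed point is reached.
bid, seen = 100, [100]
while best_response(bid) != bid:
    bid = best_response(bid)
    seen.append(bid)
print(seen[0], seen[1], "...", seen[-1])   # 100 99 ... 2
```

The fixed point (the only bid that is its own best response) is 2, which is exactly where the chain equilibrates.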

Interesting article and great thread. My 100 cents...

I think if the game were expressed as \$90-100 instead of \$2-100, or even something like \$1m-1.1m then the general consensus might be very different.

Who here would plump for \$1.1m, with a possibility of going home empty-handed, rather than the lowest possible payoff of \$1m?

Not many, i'll bet.

And that suggests that the utility function is important to the outcome. For the vast majority, \$2 isn't going to make the slightest bit of difference to their utility, optimal strategy or not, and hence it's worth taking a punt on getting a higher return. \$1m is a completely different matter, and I would expect behaviour to change accordingly.
posted by saintsguy at 3:34 PM on May 31, 2007

sotonohito: But, that isn't the way the game was defined. The game as defined does not have a binary outcome (win/lose), but rather a scalar outcome (win with X points), and its stupid to think that won't change things.

frogan: Except, as advertised, that is not the object of the game.

Maybe I'm not only stupid, but also a bad reader. Where in the article's setup of the game does it say the object of the game? Where, sotonohito, does the article say that the players of the game are instructed to seek a scalar outcome where they must win with X points?
posted by The World Famous at 3:58 PM on May 31, 2007

>> 100 is the best answer for both.

> No, it is not.

Yes, it is.

I'm still not convinced there isn't some way even within the ridiculous bounds of this formal system of logic, whatever they are, to rule out 2 as the best possible answer, thus leaving no best choice. Nobody's proved that there isn't, and 2 only stands by process of elimination.
posted by sfenders at 4:09 PM on May 31, 2007

sfenders, 2 is the best answer according to the model as such. So if the model is what you're referring to as "the ridiculous bounds of this formal system of logic", then, no. It will not happen. The reasons why 2 is the stable, optimal solution within the confines of the specified model have been stated several times by different people in the thread.

If what you're saying is that there must be some way to change the model to get what we perceive as sane, reasonable predictions, well, sure. You'd just have to adapt the model to account for some of these concepts currently external to it. Attaching a more nuanced sense of utility would help: supplement the basic valuation of different results to reflect how you speculate people would react to the situation.

As someone pointed out upthread, the beauty of the model isn't that it's so correct in general, it's that it works so well where it works well, from such simplicity.
posted by cortex at 4:25 PM on May 31, 2007

If it's really only meant to be a model, that's fine. I'm sure most people who use this stuff do treat it that way. However, it isn't presented that way in the article; it's said to be a proof inescapable within the bounds of ordinary reason that the rational thing to do is to pick "2".

"if I am obligated to choose the highest-paying option available, what would I need to know for \$100 to be that best possible choice?"

That my opponent thinks exactly as I do, and will therefore pick the same number.
posted by sfenders at 4:47 PM on May 31, 2007

a proof inescapable within the bounds of ordinary reason that the rational thing to do is to pick "2".

.... within a model where the only parameters are the general ones of game theory discussed above, I mean, none of which say "thou shalt only use this particular system of formal logic", sure.
posted by sfenders at 4:55 PM on May 31, 2007

100 is the best answer for both.

No, it is not. 99 is the best response to 100.

99 is the best answer IF! IF! IF! the object of the game is to defeat your opponent!

Also, there's no response implied -- it specifies that Lucy and Pete are to write their answers down without conferring.

It is NOT the best answer if you're trying to get compensation for your lost antique, as it specifies in the article!

Where in the article's setup of the game does it say the object of the game?

It's in the first paragraph.

Lucy and Pete, returning from a remote Pacific island, find that the airline has damaged the identical antiques that each had purchased. An airline manager says that he is happy to compensate them...

That's it. That's ALL the instructions. Lucy and Pete are to be compensated. Period. There is no inference to Lucy needing to defeat Pete.

It's a poorly worded scenario, and this thread is just us overthinking a plate of beans.
posted by frogan at 5:00 PM on May 31, 2007

I'm still not convinced there isn't some way even within the ridiculous bounds of this formal system of logic, whatever they are, to rule out 2 as the best possible answer, thus leaving no best choice.

2 isn't the best answer. By any standard, (100,100) is better. Dorks like me would say that (2,2) is Pareto-inferior -- there are outcomes that are better for everyone.

What (2,2) is is self-consistent. If I pick 2, the best you can do is pick 2. If you pick 2, the best that I can do is pick 2. It hangs together, and nobody has any incentive to change. It's this self-consistency that's at the core of the Nash equilibrium solution concept.
posted by ROU_Xenophobe at 5:02 PM on May 31, 2007
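
Both claims -- (2,2) is self-consistent, (100,100) is better for everyone -- are easy to check numerically; a Python sketch under the article's payoff rules:

```python
def payoff(mine, theirs):
    """My payoff under the article's rules: $2 bonus/penalty."""
    if mine == theirs:
        return mine
    return mine + 2 if mine < theirs else theirs - 2

def is_nash(a, b):
    # Neither player can do better by unilaterally changing their bid.
    return (payoff(a, b) == max(payoff(x, b) for x in range(2, 101)) and
            payoff(b, a) == max(payoff(y, a) for y in range(2, 101)))

assert is_nash(2, 2)          # self-consistent: nobody gains by moving
assert not is_nash(100, 100)  # 99 earns 101 against a 100 bid
assert payoff(100, 100) > payoff(2, 2)   # yet (100,100) pays both players more
print("(2,2) is self-consistent; (100,100) still pays both players more")
```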

99 is the best answer IF! IF! IF! the object of the game is to defeat your opponent!

No. If you say 100, I'm better off saying 99, and earning 101, than saying 100 and getting 100. As people have said again and again, and it remains true, I'm not trying to beat you. I'd just rather have \$101 than \$100.

Also, there's no response implied -- it specifies that Lucy and Pete are to write their answers down without conferring.

This just means that each is responding to their best guess at the other's choice. It's still a response in the terms of this sort of model.
posted by ROU_Xenophobe at 5:13 PM on May 31, 2007

The World Famous: I don't think you are stupid, or a bad reader. But I do think that you, like the people who wrote the thing, are a bit too willing to ignore some things.

No objective was explicitly stated in the setup. However, given that the setup revolved around recompense for a lost item, specified as a valuable antique, I think its implicit that there is a desire to meet the winning conditions with the highest possible number, since the numbers are supposed to represent dollar amounts.

In the pure abstraction you described, where we simply count a binary win/loss rather than points then there is no question that the 2 answer is correct. But that wasn't the game as described. That's the abstraction you produced from the game as described, and it's flawed in that there is a difference between gaining 2 points and gaining 100 points, whether those points are worth one dollar each or a million dollars each.

ROU_Xenophobe wrote"I'm not trying to beat you. I'd just rather have \$101 than \$100."

But you'd rather have \$100 than \$2, right? So why would a rational player talk themselves down to \$2? More likely a really logical player would find the \$2 equilibrium, and realize that it's a poor choice.

There are six possible game outcomes:

1) You choose \$100, the other player chooses \$100. That's the maximum balanced outcome.
2) You choose \$99, the other player chooses \$100. That's optimum for you, but harms the other guy.
3) You choose \$100 but the other player chooses \$99. Optimum for the other player, harmful but minimally so for you.
4) You choose \$2, the other player chooses \$2. Minimum balanced outcome, and not worth your time or the other person's time.
5) You choose any number >2, other player chooses 2. He gains \$4, you gain \$0.
6) You choose 2, other player chooses any number >2. You gain \$4, other player gains \$0.

The top three represent payouts varying a maximum of \$4 from the maximum payout of \$101, a worthwhile payout regardless.

The bottom three represent payouts varying a maximum of \$4 from \$0, all worthless.

Either you can take a chance on a payout of between \$97 and \$101, or you can take a chance on a payout of \$2 with the possibility of getting a whole \$4 if the other guy chooses a higher number.

The equilibrium simply isn't worthwhile, so why bother playing it? Shoot for \$100 (or \$99 if you want to be a jerk) and if the other guy chooses a lower number you will have lost a maximum of \$2, so who cares?
posted by sotonohito at 5:45 PM on May 31, 2007
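
For concreteness, the payoffs for the six outcomes listed above can be tabulated with a quick Python sketch (50 is an arbitrary stand-in for "any number > 2" in outcomes 5 and 6):

```python
def payoff(mine, theirs):
    """My payoff under the article's rules: $2 bonus/penalty."""
    if mine == theirs:
        return mine
    return mine + 2 if mine < theirs else theirs - 2

# The six outcomes above, as (your bid, their bid) pairs:
for mine, theirs in [(100, 100), (99, 100), (100, 99),
                     (2, 2), (50, 2), (2, 50)]:
    print((mine, theirs), "-> you get", payoff(mine, theirs))
    # your payoffs, in order: 100, 101, 97, 2, 0, 4
```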

Quite. Thing is, I hear there's this crowd of game theorists who seem to be pretty strongly convinced otherwise.

"Game theorists have made a number of attempts to explain why a lot of players do not choose the Nash equilibrium in TD experiments. ... Presumably game theorists know how to reason deductively, but even they by and large did not follow the rational choice dictated by formal theory."

So apparently there's some theory that people should be expected to pick the Nash equilibrium even in a single independent game with effectively no information about how the other players are likely to play. In the real world, even, not just theory-land! I want to understand this theory better, so I can ridicule it more effectively.
posted by sfenders at 5:45 PM on May 31, 2007

"if I am obligated to choose the highest-paying option available, what would I need to know for \$100 to be that best possible choice?"

"That my opponent thinks exactly as I do, and will therefore pick the same number."

Good -- now ask yourself this. Suppose you find out that your opponent has already written down her bid and sealed it in an envelope. Let's say you know it's either \$99 or \$100. What do you bid? If the number in the envelope is \$99, you are better off bidding \$99 (thus winning \$99.) If the number in the envelope is \$100, you are also better off bidding \$99 (thus winning \$101.)

I think your reasoning is something like the following: "I think my opponent thinks like me. If I bid \$99, then so did she, and I win \$99. But if I bid \$100, then so did she, and then I win \$100. So I should bid \$100."

The two arguments lead to different conclusions. But what's strange about the latter argument is that you are forced to imagine that you can affect what's in a sealed envelope by means of your bid. (Or, if you like the problem as originally framed, you can affect someone else's choices by means of your own, without them having any knowledge of what you chose.)

Let me emphasize that that doesn't really mean that your argument (as I've paraphrased it -- please feel free to tweak it if you don't like the paraphrase) is not right. But it should at least give one pause. I would hope that both of the arguments above should seem somewhat reasonable despite their incompatible conclusions -- if so, then you've started to see why there really is a conundrum here. (Newcomb's paradox is not totally irrelevant.)
posted by escabeche at 5:53 PM on May 31, 2007
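
The sealed-envelope argument reduces to two comparisons, which a couple of lines of Python can confirm (assuming the article's payoff rules):

```python
def payoff(mine, theirs):
    """My payoff under the article's rules: $2 bonus/penalty."""
    if mine == theirs:
        return mine
    return mine + 2 if mine < theirs else theirs - 2

# Whatever the envelope holds (99 or 100), bidding 99 pays strictly
# more than bidding 100.
for envelope in (99, 100):
    assert payoff(99, envelope) > payoff(100, envelope)
print("bidding 99 strictly beats bidding 100 against either envelope")
```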

But you'd rather have \$100 than \$2, right? So why would a rational player talk themelves down to \$2?

Because you think that the other actor has bid \$2. If you believe that the other actor has bid X>2, you should bid X-1. The problem for a Nash equilibrium is that then the other actor's choice isn't optimal, which means that you should question whether you really believe they'll do it.

Thing is, I hear there's this crowd of game theorists who seem to be pretty strongly convinced otherwise.

No. There's this crowd of game theorists who think that \$2 is the equilibrium. They're right. It's just that the equilibrium isn't the outcome that real players often choose. This shouldn't be a surprise to anyone who's seen the centipede game.

People sometimes try to figure out why exactly people don't choose the equilibrium strategies. To me, it isn't difficult in this context: who gives a fuck? Why would anyone bother trying to find the optimal response to possible choices when the payoffs are a few dollars, as they are in the experiments?

But then again, should we really be worried if our theories don't have good explanations for decisions that border on the ludicrously irrelevant, in wholly artificial settings designed precisely to cause these problems? For that matter, is it really worth spending time putting together a theory of how people make decisions that don't matter?

The sort of THIS IS THE ONE TRUE ANSWER! PEOPLE DO THIS AND NOTHING ELSE BECAUSE TO DO OTHERWISE IS IRRATIONAL!!! mentality you're ascribing to game theorists usually dies, in my experience, during a student's first or second course in it, at least in poli-sci.

So apparently there's some theory that people should be expected to pick the Nash equilibrium even in a single independent game with effectively no information about how the other players are likely to play.

No, there's just the Nash equilibrium concept. It's just itself, not a grand theory of human behavior. I am not a historian of game theory, but as far as I know its primary attraction is not that people really believe that all strategic interactions end up at a Nash equilibrium. Its primary attraction is that every game has at least one Nash equilibrium. A secondary attraction is the simplicity of the concept, at least as far as applying it goes.

About the best you can do is say that if you're in a situation where actors who don't play optimally get driven out (ie, firms go bankrupt, people die), you can expect that either the Nash equilibrium will be a likely outcome, or if the Nash is undesirable, you would be able to find the institutional devices that shift the outcome from the Nash to a better outcome.

And none of this would come as a shock to actual game theorists, at the very least not to people who use game theory as a means to an analytical end rather than as a topic of study in and of itself.
posted by ROU_Xenophobe at 6:29 PM on May 31, 2007

But what's strange about the latter argument is that you are forced to imagine that you can affect what's in a sealed envelope by means of your bid.

If I'm perfectly rational, then I can't do anything but what reason dictates, and thus never have the liberty to change my own mind, let alone someone else's envelope. It is strange, anyway, almost as strange as we might expect for the consequences of two perfectly intelligent demons who both think exactly alike.

Xenophobe, you're making way too much sense for this thread. Guess I'll go back to practical applications for now.
posted by sfenders at 7:14 PM on May 31, 2007

unsubscribe
posted by Eideteker at 7:16 PM on May 31, 2007

I want to understand this theory better, so I can ridicule it more effectively.

Whereas I want you to understand it better so you'll stop saying such ridiculous things about it. The thing is, you're staking a position of knowing, chortling dismissal of something you allege not to be familiar with against decades of research by some of the smartest guys on the planet. It's just possible that what's giving you so much joy and consternation here is a misapprehension, not a singular watershed insight into a whole discipline.
posted by cortex at 7:44 AM on June 1, 2007

sfenders: (and others), here is a description of how a game theorist assumes "rational" actors play games like this.

For each player in this game there are only 99 options they can choose: the player can bid \$2, the player can bid \$3, ... , the player can bid \$100. For two players, then, that means there are 99^2 = 9801 actions that could be taken, each of the form (player 1 bid, player 2 bid). For each of those pairs of decisions we can easily calculate each player's payoff according to the rules of this game. You could, if you wanted, create a large spreadsheet like so:

[Player 1 \ Player 2]    \$100 bid     \$99 bid      ...
\$100 bid                (100,100)     (97,101)      ...
\$99 bid                 (101,97)      (99,99)       ...
...                      ...           ...           ...

with the X-axis corresponding to bids by player 1, the Y-axis corresponding to bids by player 2, and the intersection containing "outcome pairs", like (player 1 payoff, player 2 payoff), for the associated bids.

Let's print out this table (assume on a very large sheet of paper). Pick any square to start with -- that is, pick any pair of bids player 1 and player 2 could possibly make -- and ask these questions:
(1) Is this an outcome player 1 would choose? For it to be an outcome player 1 would choose, there would have to be no better outcomes for player 1 in this column -- ie, Player 1 would only make this move if it is the best possible move he can make, given that Player 2 places that bid.
(2) If this is an outcome player 1 would choose, is this an outcome player 2 would choose? For it to be an outcome player 2 would choose, it would have to be the best possible outcome for player 2 in this row -- ie, Player 2 would only make this move if it is the best possible move he can make, given that Player 1 places that particular bid.

You can use (1) and (2) to annotate your printout with arrows: draw a blue arrow from pair X to pair Y if both pairs are in the same column but player 1 does better at pair Y than pair X, and a red arrow if X and Y are in the same row but player 2 does better at Y than X.

For a game theorist, the "reasoning" a "rational" player would perform for this game is essentially the same as if you made the annotated table I described above, and then performed this algorithm:
Step 1: discard any outcomes that have any arrows leading out of them -- the presence of an outward-bound arrow means that one of the players has a better option than this one under the assumption the remaining players play as before, and so it would be irrational for that player to make that choice; consequently this set of choices will never be made by rational players.
Step 2: of the outcomes that remain -- the ones that have no outward arrows -- pick the one with the best outcome for you.

(Note to others: this workup is a little nonstandard and glosses over some of the difficulties in the general situation, but is more than adequate to demonstrate the reasoning in this particular game.)

One important point is that there isn't really a temporal aspect to the reasoning here: if you want to know why, say, (100,100) isn't something "rational" players would do, you can come up with a chain of reasoning to explain it like I did earlier, but there's a sense in which explaining it as an "if this then that but then this but then that..." is more confusing than just drawing a picture and looking at which way the players' options point.

Part of the difficulty in explaining game-theoretic reasoning to an audience not already familiar with game theoretic arguments is that it isn't immediately obvious why reasoning of the form "Well, if I thought my opponent is going to bid \$100 I'd bid \$99, but they'd know that and so bid \$98, ad infinitum" is allowable reasoning but "Well, if I think about it this way we both wind up with \$2, which is stupid, and you'll know it's stupid too, so I should bid \$100" isn't allowable reasoning, per the definition of "reasoning" relevant in this context. Drawing some sort of payoff table/matrix/graph is a little more involved a process but does make clearer what kinds of reasoning a game-theoretic rational agent is supposed to make.

The sort of reasoning sfenders proposes -- which is essentially Hofstadter's hyperrationality, phrased differently -- is in this context a kind of meta-reasoning: "Let me put on my game-theorist's hat and see what my payout is in nash equilibrium....hmm...\$2. My opponent would also get \$2 in nash equilibrium...hmmm...that's not very good for either of us. I'm going to take off my game-theorist's hat cause this kind of reasoning is a straight-shot to nowhere I want to wind up, and I know my opponent is going to come to the same conclusion I just did and do the same thing. If I knew that, what would I pick? \$100 is the best, and I know my opponent will agree." Essentially, reasoning that "if I drew those arrows on the graph and my opponent drew the same, then we'd wind up with a bad outcome, so we should instead try a different tack because it's clear we're getting nowhere this way". In real life we call that meta-reasoning "reasoning", but for the sake of game-theoretic analysis of a game like this that level of rationality is not allowed.

There is always a problem with the Hofstadter approach, however, and that is a certain kind of circularity: suppose I use that "hyper-logic" to realize that the race to the bottom is madness, decide that I am going to pick \$100, and think that you will do the same -- shouldn't I pick \$99 to get that extra dollar? (or: It's not enough just to be hyper-rational -- you have to be hyper-rational and stop trying to be rational after graduating to hyper-rationality, or the endless cycle repeats.)

The rapid offensive unit and others have already done an excellent job of explaining the uses and abuses of the Nash equilibrium and so forth -- it is a very simple concept that under certain circumstances helps elucidate certain kinds of behavior.

As mentioned earlier this thread reminds me of real-life discussions of the Monty Hall problem and similar counterintuitive results -- there are certain kinds of mathematical thought with which you can reliably enrage people by insisting on the correct answer (another in that category: I have two children, each of whom have 50% probability (independent) of being a boy. If one of them is a boy, what are the odds I have two boys? -- simple conditional probability, I have seen people almost come to blows when I insist it's 1/3).

From my anecdotal surveying you can pull 10 college-educated people off the street, ask them the question about two boys, and be lucky to get more than 3 that will get 1/3 on the first answer; more surprising, though, is that there's often a few who will insist you're wrong, forever, no matter how you explain it to them. For something like the game theoretic reasoning in this article it's more like 1 who will grok how the "rational" agents are supposed to act, once it's explained, 3 who will at least admit they're confused, and then another 6 who will insist you're wrong, which in this context means that they seemingly can't put their common-sense understanding of "rationality" on hold long enough to understand the kind of reasoning being employed by the game-theorist. (These anecdotal "surveys" are mainly from job interviews I've given over the years, for what it's worth, and surprisingly often with those who ostensibly have solid number senses.)

And thus my conclusion: I've always thought that the difficulty of even grokking the game theoretic reasoning was the best argument against its general appropriateness as a model of actual human thought processes. Yes, it is true that empirically people don't act in a way consistent with their reasoning from Nash equilibrium -- we have an example of that in the linked article, and other real-life examples abound -- but that doesn't necessarily mean they are not trying to reason in that fashion: it's possible that people do attempt to strategize in this way, but just, on average, are terrible at actually reasoning in this fashion. What is to me a pretty striking argument against the applicability of this model as a model of thought is the tremendous difficulty it seems many people have even understanding how this kind of reasoning is supposed to work -- if it is happening at all, it pretty much has to be subconsciously (not airtight, I'll admit, but just my opinion anyways).

I wouldn't be surprised if the underlying cognitive mechanism is essentially Monte Carlo, which for skilled players playing skilled players with realistic mental models of each other will suggest they follow strategies clustering close around the Nash equilibrium, but for unskilled players with unrealistic mental models of each other might very well suggest actions like those taken in this game.
posted by little miss manners at 7:59 AM on June 1, 2007 [2 favorites]
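
The annotate-and-discard algorithm described above can be implemented directly; a Python sketch, using the observation that a cell with no outgoing arrows is exactly a pair of mutual best replies:

```python
def payoff(mine, theirs):
    """My payoff under the article's rules: $2 bonus/penalty."""
    if mine == theirs:
        return mine
    return mine + 2 if mine < theirs else theirs - 2

bids = list(range(2, 101))

# Best reply to each opposing bid (unique in this game, so max suffices):
best = {t: max(bids, key=lambda m: payoff(m, t)) for t in bids}

# Step 1: a cell (a, b) survives iff it has no outgoing arrows, i.e.
# each bid is already the best reply to the other.
survivors = [(a, b) for a in bids for b in bids
             if a == best[b] and b == best[a]]
print(survivors)   # [(2, 2)] -- the lone cell with no outgoing arrows
```

With a single surviving cell, step 2 (pick the best remaining outcome) is trivial: (2, 2) is all that's left.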

Just in case there are still people reading who don't think game theory has any real world applicability, these chaps built a winning poker bot by programming it with an understanding of game theory itself, not of poker! This is quite unlike all the other poker bots, which rely on human-programmed strategy. Quote: it "develops its strategy after performing an automated analysis of poker rules". Which is really remarkable.
posted by MetaMonkey at 9:10 AM on June 1, 2007

In your experience, is there a fervent favorite for the Two Boys problem? I can see intuitive-but-wrong answers for either of 1/2 or 1/4, but I'm not sure which one would get the heavier stumping. I'd imagine 1/2, but now I'm curious.
posted by cortex at 9:13 AM on June 1, 2007

cortex: I've only ever encountered people who get fervent about it being 1/2.
posted by little miss manners at 9:23 AM on June 1, 2007

That's a comfort, then.
posted by cortex at 9:27 AM on June 1, 2007

Of course we could just say that "rational" in game theory doesn't mean doing what is indicated by general methods of deductive reasoning, but instead means following this one particular procedure for deciding on an action, but that seems unsatisfying. I'd sooner reject the premise that it's logically possible for more than one person to be both perfectly rational and perfectly selfish, though that's a little less reasonable.

There is always a problem with the Hofstadter approach, however, and that is a certain kind of circularity

I do think that circularity can be resolved. Aside from that, the result leading to \$2 still seems to depend on just the degree of "meta-reasoning" that makes it necessary to do so: "Player 1 would only make this move if it is the best possible move he can make, given that Player 2 places that bid." How does he arrive at the intermediate assumption that player 2 would place any given bid? Only by reasoning about the process of player 2's reasoning, including 2's reasoning about 1. That is just as much "meta-reasoning" as my understanding of what Hofstadter apparently came up with.
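For what it's worth, the chain of best responses that drags the claim down to $2 can be spelled out mechanically. A quick Python illustration (my own sketch of the standard backward-induction argument, not anything from the article):

```python
def payoff(mine, theirs):
    """Traveler's Dilemma payoff: equal claims pay face value; otherwise
    both get the lower claim, +$2 for the low bidder, -$2 for the high one."""
    if mine == theirs:
        return mine
    low = min(mine, theirs)
    return low + 2 if mine < theirs else low - 2

def best_response(theirs):
    """The claim that maximizes my payoff, given the other player's claim."""
    return max(range(2, 101), key=lambda mine: payoff(mine, theirs))

# Start from 100 and repeatedly best-respond to the previous claim:
# each step undercuts by one dollar, all the way down to the $2 fixed point.
claim, seen = 100, []
while claim not in seen:
    seen.append(claim)
    claim = best_response(claim)
print(seen)  # [100, 99, 98, ..., 3, 2]
```

Note what the loop does not show: each `best_response` call silently assumes the opponent's claim is already known, which is exactly the "how does he arrive at the intermediate assumption about player 2's bid?" gap described above.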
posted by sfenders at 10:10 AM on June 1, 2007

Will you guys cut out with your nerd war? You're messing with my Recent Activity page!!

Look, everything that can be said about this has been said not once but many, many times. There are more interesting puzzles in game theory than this, lots of unresolved questions and paradoxes. Why don't you guys do something actually useful -- and also net yourself \$25,000 -- by doing the Turing Challenge, or construct a perfect voting system or something.

My guess is that this type of question draws people in because it's in that gray area of game theory/economics which is really more about psychology than mathematics. That is, the math is at the level of what you learn after graduating from elementary school. The logic is no more complex than one of Smullyan's simpler paradoxes. So it's really a psychology question. And in the fuzzy world of psychology, everyone has an opinion and everyone's opinion has some validity.
posted by vacapinta at 11:24 AM on June 1, 2007

Damn the Recent Activity page! I miss the days when I could comment late in a long thread, safe in the knowledge that almost nobody would read it.

No doubt it's all been said before, but I was thinking that perhaps it hasn't been said enough, given that a prominent practitioner of game theory first writes as though he's never heard of such considerations, then alludes to them only to say outright that it's a step that has not yet been taken.

I'm going to go read about Inconsistencies in Extensive Games: Common Knowledge Is Not the Issue in the hopes that some of it's been said in there.

"Even though optimal strategic behavior is often described as utility maximization given beliefs about other players' actions, these beliefs are usually not explicitly incorporated in game theory. Without a theory of beliefs, game theory is conceptually incomplete. ... Bicchieri (1989) presents an extension of game theory ...

Our paper concentrates on her main result, that in certain extensive finite games with perfect information, if players are rational and have common knowledge of the theory of the game, then the theory becomes inconsistent. She argues that this may account for play outside Nash equilibrium by rational players. She also claims that the theory will be consistent if players only have the minimal beliefs necessary for the reasoning behind the backward induction solution.

In this paper we refute this last claim."
posted by sfenders at 2:56 PM on June 1, 2007

little miss manners: Completely off-topic, but why isn't the answer to the question about your children 1/2? I thought I remembered my probability class pretty well, and what I remember doesn't come up with 1/3...

You said that if one child is a boy, what are the odds that they are both boys. Since we know that one child is male, that leaves only the other child as an unknown, and the odds there are (theoretically) 1/2, right? Likewise, if both are unknown shouldn't the answer be 1/4 (that is 1/2 X 1/2)? Where did I go wrong?

As for the problem itself, I can see how the Nash equilibrium is achieved here, but it seems more like an interesting example of how game theory (like everything else) can sometimes arrive at a perfectly "reasonable" but ultimately foolish result rather than an example of how game theory helps out in real life.

What I mean is that there's no arguing that the \$2 answer will *ALWAYS* yield a win, but it's such a small win as to be meaningless, which indicates that in the real world it isn't really a useful answer.

Sure, if I can play the game an infinite number of times, the little guaranteed win will eventually pay off, but the game was defined as a play-once scenario, not a play-many scenario.
posted by sotonohito at 5:44 PM on June 1, 2007

Consider the four possibilities, each of which has equal probability: BB, BG, GB, GG.

If one is a boy, we know GG isn't true, but that's all we know. So the conditional probability of BB is prob(BB) / prob(BB or BG or GB), or 0.25 / 0.75, or 1/3. You can prove this to yourself with the random functions of Excel if you want.
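Or, if Excel isn't handy, a quick simulation makes the same point -- a Python sketch that conditions on the "at least one boy" reading of the puzzle:

```python
import random

random.seed(0)
trials = 100_000
at_least_one_boy = 0
both_boys = 0
for _ in range(trials):
    # Each child is independently a boy or a girl with probability 1/2.
    kids = [random.choice("BG") for _ in range(2)]
    if "B" in kids:           # condition: at least one is a boy (GG discarded)
        at_least_one_boy += 1
        if kids == ["B", "B"]:
            both_boys += 1

frac = both_boys / at_least_one_boy
print(frac)  # close to 1/3, not 1/2
```

The GG families get thrown out, but the BG and GB families don't, and they outnumber the BB families two to one -- which is the whole trick.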

The catch is that in real life, you almost certainly know that one specific child (the one you're looking at, or the one being described) is a boy. The probability that the other one is also a boy is indeed one-half. The problem gets its purchase by being coyly indeterminate -- one child is a boy, but it doesn't say which.

Of course, the best answer to "If one of them is a boy, what are the odds I have two boys?" is that there aren't odds over realized events. The probability that little miss manners has two sons is either exactly zero or exactly one. I might not know which, but there's no probability about it.
posted by ROU_Xenophobe at 8:19 PM on June 1, 2007 [1 favorite]

ROU_Xenophobe: Thanks.
posted by sotonohito at 4:50 AM on June 2, 2007
