Because Monty Hall is so last century.
November 1, 2015 6:29 AM   Subscribe

The Sleeping Beauty Problem is a problem in probability (rumored to have originated at MIT) that appears trivially simple, yet has inspired some rather sophisticated arguments.

We plan to put Beauty to sleep by chemical means, and then we'll flip a
(fair) coin. If the coin lands Heads, we will awaken Beauty on Monday
afternoon and interview her. If it lands Tails, we will awaken her Monday
afternoon, interview her, put her back to sleep, and then awaken her again
on Tuesday afternoon and interview her again.

The (each?) interview is to consist of the one question: what is your
credence now for the proposition that our coin landed Heads?

When awakened (and during the interview) Beauty will not be able to tell
which day it is, nor will she remember whether she has been awakened
before.

She knows the above details of our experiment.

What credence should she state in answer to our question?




The Monty Hall problem, most recently previously.
posted by Obscure Reference (131 comments total) 22 users marked this as a favorite
 
My reaction, on first read.

There are three different scenarios in which Beauty could be awakened.
1. Coin lands heads, awakened on Monday
2. Coin lands tails, awakened on Monday
3. Coin lands tails, awakened on Tuesday

If we imagine 100 coin flips, we expect scenario 1 to play out 50 times, and scenarios 2 and 3 to each play out 50 times. That gives us 150 awakenings, 100 of which occur after a coin was flipped tails. She should be 67% confident that the coin landed tails.
posted by enjoymoreradio at 6:40 AM on November 1, 2015 [5 favorites]
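
A minimal brute-force check of the counting above (a Python sketch, not from the thread; it just tallies simulated awakenings by scenario, assuming a fair coin):

    import random

    def tally_awakenings(runs=100_000):
        counts = {"heads_monday": 0, "tails_monday": 0, "tails_tuesday": 0}
        for _ in range(runs):
            coin = random.choice(["heads", "tails"])
            if coin == "heads":
                counts["heads_monday"] += 1       # scenario 1: one awakening
            else:
                counts["tails_monday"] += 1       # scenarios 2 and 3: two awakenings
                counts["tails_tuesday"] += 1
        total = sum(counts.values())
        tails = counts["tails_monday"] + counts["tails_tuesday"]
        print("fraction of awakenings that follow tails:", round(tails / total, 3))

    tally_awakenings()   # prints roughly 0.667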


Except, there are also 100 awakenings on Monday.
posted by Thorzdad at 6:45 AM on November 1, 2015


Yeah. She can be 67% confident it's Monday, and 67% confident that the coin landed tails.
posted by motty at 6:47 AM on November 1, 2015 [8 favorites]


There's an element of psychology to the question, I suppose, because it asks her what her credence is. Either she's thinking about it from the perspective of the coin flip -- 50/50 -- or she's thinking about it from the perspective of being awake -- 2/3 of the times that she is awake it will be because of tails.
posted by jacquilynne at 6:54 AM on November 1, 2015 [2 favorites]


I like to explain it by supposing she guesses tails in every interrogation and walking through a situation where it's heads (1 wrong) and where it's tails (2 right.)
posted by michaelh at 6:54 AM on November 1, 2015 [2 favorites]


Restating this in another way:

If she awakens on a Monday, the probability of Heads was 50%
If she awakens on a Tuesday, the probability of Heads was 0%

She has a ~66% chance of awakening on a Monday, resulting from the facts mentioned above (100 awakenings out of 150 will be on a Monday).

Therefore, the probability of the coin being Heads was (0.66 * 0.5 + 0.33 * 0.0) = 0.33, or one-third.
posted by Room 101 at 6:57 AM on November 1, 2015 [4 favorites]
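
The same arithmetic with exact fractions (a worked restatement of the calculation above, in LaTeX notation):

    P(\text{Heads}) = P(\text{Mon})\,P(\text{Heads} \mid \text{Mon}) + P(\text{Tue})\,P(\text{Heads} \mid \text{Tue}) = \tfrac{2}{3} \cdot \tfrac{1}{2} + \tfrac{1}{3} \cdot 0 = \tfrac{1}{3}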


I was promised a goat.
posted by dr_dank at 6:59 AM on November 1, 2015 [29 favorites]


I think Metroid Baby's comment from the time Ryan North was trapped in a pit also applies here.

Final answer.
posted by bigendian at 6:59 AM on November 1, 2015 [4 favorites]


Except, there are also 100 awakenings on Monday.

Yes, 50 in Scenario 1, and 50 in Scenario 2.
posted by enjoymoreradio at 7:04 AM on November 1, 2015


One reason this is difficult is that people often think it's a math problem, when it's actually a philosophy problem.
posted by escabeche at 7:14 AM on November 1, 2015 [8 favorites]


Therefore, the probability of the coin being Heads was (0.66 * 0.5 + 0.33 * 0.0) = 0.33, or one-third.

I misunderstood the experiment at first. Once I understood it, I didn't understand why anyone would think it was confusing or even all that interesting. Then you phrased it this way, and I get it.

The probability that the coin flip was heads or tails is always 50/50. You're just putting Sleeping Beauty in a position where you're asking her about the results in a biased way. To take the extreme case, if I tell you I'm going to flip one coin, and if it comes up heads I'll ask you what the result is, you should "guess" heads every time.

This is way simpler than Monty Hall IMHO, although it addresses a simple kind of mistake people make (ignoring the role their observations play in the results).

One reason this is difficult is that people often think it's a math problem, when it's actually a philosophy problem.

How so? Honestly seems like simple math to me.
posted by mark k at 7:18 AM on November 1, 2015 [3 favorites]


Did Beauty get a vote in this, or was she just told the details?
posted by still_wears_a_hat at 7:37 AM on November 1, 2015 [6 favorites]


Wait, so we're only doing this for two days? The coin doesn't get tossed again on Tuesday? It's trivial in that framing.

I was promised infinite series, dammit.
posted by fifthrider at 7:43 AM on November 1, 2015 [1 favorite]


The question she is asked makes most sense to me unpacked like this:

"Given that you have just been awakened, how likely do you think it is that you exist in a universe with twice as many instances of you being awakened, compared to the only other possible universe?"
posted by howfar at 7:43 AM on November 1, 2015 [7 favorites]


previously
posted by stebulus at 7:46 AM on November 1, 2015 [2 favorites]


This seems like a setup well-suited to an agent-based model. Why has no one yet produced an agent based model?
posted by If only I had a penguin... at 7:46 AM on November 1, 2015


I personally am just baffled as to why a coin is flipped instead of using the tried-and-tested Kiss of a Handsome Prince method, why must the science complicate everything
posted by billiebee at 7:47 AM on November 1, 2015 [11 favorites]


Suppose that if the coin falls tails, Sleeping Beauty is woken up 99 times and asked the same question - and suppose we do the experiment twice, and once we get heads and once we get tails.

If SB says "tails" every time, she'll be wrong once out of 100 times she's asked the question.

In the given example, if she says "tails" every time, she'll be wrong one time in three.

Doesn't seem so hard...?
posted by lupus_yonderboy at 7:48 AM on November 1, 2015 [2 favorites]


I personally am just baffled as to why a coin is flipped instead of using the tried-and-tested Kiss of a Handsome Prince method

I can't be everywhere, man.
posted by Artful Codger at 7:53 AM on November 1, 2015 [14 favorites]


This is related to the presumptuous philosopher problem and the simulation argument. The hidden assumption that usually screws everything up is that the observer/observation is 'typical' in some sense.

Imagine you own two residences - one in Oslo, and one in Baltimore. You fly between the two all the time. For some unknown reason, you always awaken once per night when you sleep in Oslo, and twice per night when you sleep in Baltimore. One night you wake in the dark and can't immediately remember where you are, or if you've awoken already tonight. Can you conclude from the mere fact that you're awake that you are twice as likely to be in Baltimore, because so many more awakenings happen there?
posted by pixelrevolt at 7:53 AM on November 1, 2015 [11 favorites]


There is some interesting statistics here, but to my mind half the attention this is getting is simply due to its confusing wording/framing.

...which is perhaps a more interesting phenomenon: making things more interesting by making them harder to understand.
posted by ropeladder at 7:56 AM on November 1, 2015 [2 favorites]


This is an excellent explanation of the thirder position, including a response to one of the stronger halfer arguments.
posted by Proofs and Refutations at 7:57 AM on November 1, 2015 [3 favorites]


This must be a linguistic problem, not a probability problem.

If the experimenters tally up the number of times the coin question is answered correctly, SB's strategy to maximize the tally is to guess "TAILS", and the expected value of the tally is 1. The alternate strategy is to guess "HEADS" with an expected value of ½. This is the same as the "thirder" position, since 1/(1+½) is ⅔.

On the other hand, if the experimenters tally 1 in an experiment where the coin question is answered correctly every time it is asked within that experimental run and zero otherwise, any guess by SB will maximize the expected value of the tally at ½. This is the same as the "halfer" position.

It seems like since the question is about SB's credence, it's more like the first situation than the second.
posted by the antecedent of that pronoun at 8:02 AM on November 1, 2015
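
A sketch of the two tallying schemes described above (Python, assuming a fair coin; the function names are just illustrative):

    import random

    def per_question_tally(guess, runs=100_000):
        # Scheme 1: score 1 for every individual interview answered correctly.
        score = 0
        for _ in range(runs):
            coin = random.choice(["HEADS", "TAILS"])
            interviews = 1 if coin == "HEADS" else 2
            score += interviews * (guess == coin)
        return score / runs

    def per_run_tally(guess, runs=100_000):
        # Scheme 2: score 1 only if every interview in the run is answered correctly.
        score = 0
        for _ in range(runs):
            coin = random.choice(["HEADS", "TAILS"])
            score += (guess == coin)
        return score / runs

    print(per_question_tally("TAILS"))   # about 1.0
    print(per_question_tally("HEADS"))   # about 0.5
    print(per_run_tally("TAILS"))        # about 0.5
    print(per_run_tally("HEADS"))        # about 0.5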


To answer this question, now shall you deal with me, O Prince, and all the powers of Hell.
posted by Cool Papa Bell at 8:04 AM on November 1, 2015


Well screw probability theory when you have brute force. I just did a quick simulation in Excel of about 3600 weeks, resulting in 5662 coin flips. Flip a coin in one column. Flip a coin conditionally in the next column. Take all the flips and check the combined proportion. Of course the process of doing this makes it obvious what the answer will be. The proportion heads was 0.50176616.

I realize that 3600 isn't the same as "infinite," but nonetheless, I hope we've put this to rest.
posted by If only I had a penguin... at 8:05 AM on November 1, 2015


Also, how the hell did this experiment get past an ethics board? This is some bullshit research design. That much we can all agree on.
posted by If only I had a penguin... at 8:07 AM on November 1, 2015 [7 favorites]


Another way to look at it is to ask, "What is the probability that today is a Tuesday?" It will be Tuesday 25% of the time with a 0% chance the coin was Heads. And if it's a Monday there is exactly a 50% chance it is Heads. Ergo, always opt for Tails...
posted by jim in austin at 8:12 AM on November 1, 2015


The 3-Monty Hall and (2,1)-Sleeping Beauty instances are not very intuitive, but you can use other parameters if you like.

1000-Monty Hall: You choose door #1. The host opens every door from 2 to 1000 inclusive, except door 817. Should you switch?

(1,0)-Sleeping Beauty: If the coin lands heads, I'm going to wake you up, once; if the coin lands tails, I'll let you sleep. I wake you up and ask: was it heads?

Now gradually change the parameters from the original; where exactly does the 50-50 argument stop working? Why?
posted by you at 8:12 AM on November 1, 2015 [8 favorites]


Well screw probability theory when you have brute force. I just did a quick simulation in Excel of about 3600 weeks, resulting in 5662 coin flips. Flip a coin in one column. Flip a coin conditionally in the next column. Take all the flips and check the combined proportion. Of course the process of doing this makes it obvious what the answer will be. The proportion heads was 0.50176616.

I don't think you're testing the same thing. The probability being sought isn't how often the coin comes up heads, but the probability of her being awakened. She is twice as likely to be awakened when the coin has come up tails.
posted by howfar at 8:13 AM on November 1, 2015 [6 favorites]
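
One way to see the difference between the two quantities (a Python sketch with a single flip per simulated week, assuming a fair coin): the fraction of flips that land heads stays near 1/2, while the fraction of awakenings that follow heads comes out near 1/3.

    import random

    flips = heads_flips = 0
    awakenings = heads_awakenings = 0

    for _ in range(100_000):                   # one experiment (one flip) per loop
        coin = random.choice(["H", "T"])
        flips += 1
        heads_flips += (coin == "H")
        wakes = 1 if coin == "H" else 2        # heads: Monday only; tails: Monday and Tuesday
        awakenings += wakes
        heads_awakenings += wakes * (coin == "H")

    print(heads_flips / flips)                 # about 0.50: fraction of flips that are heads
    print(heads_awakenings / awakenings)       # about 0.33: fraction of awakenings after heads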


Since no one's put forth the Bayesian argument for 1/2, here it is: If you asked her after the coin was flipped but before she went to sleep, she'd have 50% confidence either way. When she wakes up, the only new information she has is that she woke up -- and that would have happened either way. So she should still have 50% confidence because she hasn't learned anything new. Consider also the case that the coin is loaded -- say that the coin comes up heads 90% of the time, but that she gets awakened 50 times if the coin comes up tails. SB will have high confidence that the coin came up tails, but 90% of the times you run the experiment, she'll be very confidently wrong.

Nick Bostrom (previously) makes an interesting argument that the answer could be anywhere between 1/2 and 1/3 depending on whether Sleeping Beauty is the only observer in the universe, but I don't quite understand his argument.
posted by ectabo at 8:14 AM on November 1, 2015 [5 favorites]
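
A sketch of the loaded-coin variant mentioned above (Python; 90% heads and 50 awakenings on tails, both taken from the comment): most interviews happen in tails runs, yet tails runs are rare.

    import random

    runs = 100_000
    interviews = tails_interviews = tails_runs = 0

    for _ in range(runs):
        coin = "H" if random.random() < 0.9 else "T"   # loaded coin: 90% heads
        wakes = 1 if coin == "H" else 50
        interviews += wakes
        if coin == "T":
            tails_interviews += wakes
            tails_runs += 1

    print(tails_interviews / interviews)   # about 0.85 of interviews follow tails...
    print(tails_runs / runs)               # ...but only about 0.10 of runs are tails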


I too misread the problem to seem more interesting than it is. With only one flip on Monday, it seems 1/3 heads and 2/3 tails (half of Mondays and all of Tuesdays) is the clear answer.

I thought there was a coin flip and interview each day -- with tails meaning there'd be another day, another interview, and a new flip. What's the answer to *that* problem? Is it answerable despite having a potentially infinite number of awakenings, and so no concrete denominator?
posted by zittrain at 8:17 AM on November 1, 2015 [1 favorite]


I don't think you're testing the same thing. The probability being sought isn't how often the coin comes up heads, but the probability of her being awakened. She is twice as likely to be awakened when the coin has come up tails.

She's awakened with every coin flip. So Monday they flip a coin and wake her no matter what. Some Tuesdays (depending on the Monday flip) they flip a coin and wake her. I mean they can't interview her without waking her. The probability that she's been wakened is always 1.0.

If there's no coin flip on Tuesdays, then yeah, that's a different story. However, it's one that could also be solved with a quick excel sheet, rather than endless arguing. I'm pretty sure that spread sheet would show 1/3 / 2/3.
posted by If only I had a penguin... at 8:28 AM on November 1, 2015


Flip a coin in one column. Flip a coin conditionally in the next column.

This is how I misread the problem at first, but there's only one coin flip. There's no conditional second flip. The only condition is her being asked to guess twice, which is conditional on the coin being tails.

If you count from 1 to 100, and every time you say an even number you smack a statistician, and every time you say an odd number you smack them twice, what are the odds for any given smack that you are on an odd number? This is, IMO, the more easily framed question--you are being asked about the frequency of smacks, not the proportion of odd numbers.
posted by mark k at 8:28 AM on November 1, 2015 [4 favorites]


It's also covered in the Peter Norvig thread.
posted by CheeseDigestsAll at 8:47 AM on November 1, 2015 [2 favorites]


Plane takes off.
posted by standardasparagus at 8:57 AM on November 1, 2015 [2 favorites]


Easy answer: Snap the neck of the creep who's been holding you prisoner for eternity and go on a revenge spree leaving coins in every mathematician's eyes along the bloody trail. 100% effective
posted by Potomac Avenue at 9:10 AM on November 1, 2015 [2 favorites]


Imagine you own two residences - one in Oslo, and one in Baltimore. You fly between the two all the time. For some unknown reason, you always awaken once per night when you sleep in Oslo, and twice per night when you sleep in Baltimore. One night you wake in the dark and can't immediately remember where you are, or if you've awoken already tonight. Can you conclude from the mere fact that you're awake that you are twice as likely to be in Baltimore, because so many more awakenings happen there?

Depends: whom did I have for dinner the night before?
posted by Potomac Avenue at 9:14 AM on November 1, 2015 [3 favorites]


Here's an alternate version:

I'm flipping a coin. If it lands heads, I will ask one random person about my coin toss result. If it lands tails, I will ask two random people. I flipped the coin, and now, as per the rules, I have selected you as a random person to ask: What are the chances my coin toss landed heads?
posted by aubilenon at 9:14 AM on November 1, 2015 [6 favorites]
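
A quick check of this variant from "your" point of view (a Python sketch; the pool size is an arbitrary assumption): conditioned on being one of the people asked, heads comes out around 1/3.

    import random

    POOL = 100            # number of candidate people (illustrative)
    you = 0               # index of "you" in the pool
    asked = heads_when_asked = 0

    for _ in range(1_000_000):
        coin = random.choice(["H", "T"])
        chosen = random.sample(range(POOL), 1 if coin == "H" else 2)
        if you in chosen:
            asked += 1
            heads_when_asked += (coin == "H")

    print(heads_when_asked / asked)   # about 0.33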


I'm not sure what it is that I am supposed to understand by 'credence.' Is this the same as saying "What probability would you assign to our statement: 'our coin landed Heads'"?
posted by carter at 9:30 AM on November 1, 2015


I'm flipping a coin. If it lands heads, I will ask one random person about my coin toss result. If it lands tails, I will ask two random people. I flipped the coin, and now, as per the rules, I have selected you as a random person to ask: What are the chances my coin toss landed heads?

Chances for heads? 50%
Chances for tails? 50%
Chance you are the second person asked on a tails flip? 50%
Chance you are the second person asked on any flip? 25%
posted by jim in austin at 9:32 AM on November 1, 2015


I'm not sure what it is that I am supposed to understand by 'credence.'

Credence.
posted by howfar at 9:33 AM on November 1, 2015 [1 favorite]


Chances for heads? 50%
Chances for tails? 50%
Chance you are the second person asked on a tails flip? 50%


Unless you're leaving the phrase "conditional on a T" out from that last point, please explain to me the chances that I'm the first person asked on a tails flip.
posted by PMdixon at 9:40 AM on November 1, 2015


I'm flipping a coin. If it lands heads, I will ask one random person about my coin toss result. If it lands tails, I will ask two random people.

Say you have two people to ask, Alice and Bob. Alice is 100% likely to be asked if it lands tails, 50% likely to be asked if it lands heads. Before you flipped the coin, she didn't know whether you were going to ask her, so when you ask her, she learns something new about the situation, and she can be a little more confident that it landed tails.

But in the original problem, SB is equally likely to be asked either way -- she knows she's going to be asked before she goes to sleep, so she doesn't learn anything new when she wakes up. What changed in SB's knowledge between going to sleep and waking up so that before she went to sleep, she was 50% confident, and after waking up, she's 66% confident?
posted by ectabo at 9:45 AM on November 1, 2015 [1 favorite]


What exactly is this whole process trying to determine?
posted by Pirate-Bartender-Zombie-Monkey at 9:51 AM on November 1, 2015 [1 favorite]




What changed in SB's knowledge between going to sleep and waking up so that before she went to sleep, she was 50% confident, and after waking up, she's 66% confident?

There are 4 equiprobable states of the world on Sunday night: (H, T) × (Mon, Tue). She cannot observe (H, Tue) by design. Therefore if she makes an observation, it is not (H, Tue). The other three states remain equiprobable and exhaustive. QED
posted by PMdixon at 9:55 AM on November 1, 2015 [6 favorites]
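
Written out as a conditional probability over those four states (one way to formalize the step above, in LaTeX notation):

    P(H \mid \text{not } (H, \text{Tue})) = \frac{P(H, \text{Mon})}{P(H, \text{Mon}) + P(T, \text{Mon}) + P(T, \text{Tue})} = \frac{1/4}{3/4} = \frac{1}{3}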


Have we done Newcomb's Paradox yet?
posted by thelonius at 9:56 AM on November 1, 2015


For real though, why is this poor woman being held? Why are they drugging her? What do they hope to learn?
posted by OverlappingElvis at 9:57 AM on November 1, 2015 [1 favorite]


Have we done Newcomb's Paradox yet?

Eh, "Laplace's demon fucks with people" is kinda boring.
posted by PMdixon at 9:59 AM on November 1, 2015


Unless you're leaving the phrase "conditional on a T" out from that last point, please explain to me the chances that I'm the first person asked on a tails flip.

It is assuming you know it is tails. If that is the case, there is a 50% chance you are either first or second. If you don't know the flip, there is a 75% chance you are first and a 25% chance you are the second. This applies to the original problem of Sleeping Beauty in that the possibility of it being a Tuesday (25%) with a 100% chance of the toss having been tails skews the probability somewhat similarly to Monty's privileged information. Always opt for tails...
posted by jim in austin at 10:13 AM on November 1, 2015


What is the probability that if I am reading an explanation of the Sleeping Beauty paradox, that that explanation is incorrect?

What is the probability that if I am reading a post on the Sleeping Beauty paradox, that it is (at least) a double?

What is the probability that if a post is late in a thread, it is not serious or to be trusted?
posted by chortly at 10:22 AM on November 1, 2015


This puzzle should be called "Credence Cointosser Revival"
posted by chavenet at 10:27 AM on November 1, 2015 [17 favorites]


What is the probability that if I am reading an explanation of the Sleeping Beauty paradox, that that explanation is incorrect?

0, because by some fixed point theorem or other there exists a language in which it is correct.

What is the probability that if I am reading a post on the Sleeping Beauty paradox, that it is (at least) a double?

Depends on your prior probability on drug induced amnesia.

What is the probability that if a post is late in a thread, it is not serious or to be trusted?

Green.
posted by PMdixon at 10:28 AM on November 1, 2015


A perfect coin is supposed to land on Heads 1/2 of the time. Whether or not you've been woken up twice for that shouldn't have any bearing on how the coin landed.
posted by teponaztli at 11:06 AM on November 1, 2015


"Given that you have just been awakened, how likely do you think it is that you exist in a universe with twice as many instances of you being awakened, compared to the only other possible universe?"

This phrasing is strongly related to Nick Bostrom's argument that ectabo cited.

Basically, under normal circumstances "you" could be (a) your ordinary self in the ordinary universe, or (b) a Boltzmann brain that randomly appeared last Thursday and will disappear an instant later. Or your self in a simulation, or some other weird alternative.

Normally you don't have to think about which "you" you are because the random instances aren't biased—if I want to know whether a die rolled a 6, exactly 1/6 of the Boltzmann "me"s will roll a 6, so they don't give me any information. But Sleeping Beauty is different. She knows that 2/3 of her "selves" will exist in universes where the coin landed on Tails, because the experimenters establish a cause-and-effect relation between "flipping Tails" and "creating more instances of Sleeping Beauty."

(By the way—I said that normally the alternative "you"s aren't biased. This isn't quite the case. Imagine that I flip a coin: if it lands Heads, I design a supercomputer that lets us run Matrix-like simulations; if it lands Tails, I prevent anyone from ever designing such technology. If we do end up with sim tech, we'll probably simulate the real world, including myself in it (maybe to study history or economics or whatever). So before I flip the coin, what should be my credence that it will land Heads? Well, if it lands Tails, I am the only "myself," but if it lands Heads, there will be lots of other simulated "me"s. So I should believe that Heads is more likely, to the degree that I believe future simulated worlds will include myself. Of course this depends on being able to unilaterally determine whether or not sim tech will exist, but hey, it's a thought experiment.)
posted by Rangi at 11:08 AM on November 1, 2015


Eh, "Laplace's demon fucks with people" is kinda boring.

Newcomb's paradox doesn't depend on an omniscient predictor! They just have to be a little better than chance. You can write out the expected values for 1-boxing or 2-boxing:

E(1-box) = $high × P(predict 1-box)
E(2-box) = $low + $high × (1 − P(predict 1-box))

So you should 1-box if P(predict 1-box) > ½ + $low / $high / 2. With the usual values of $low = $1,000 and $high = $1,000,000, the Predictor only needs to be right more than 50.05% of the time. Random guessing would already be 50% accurate; don't you think a Predictor who knows you well, like a friend or family member, could eke out an extra 0.05% accuracy?

Transient global amnesia can make you "engage in conversation that recycles about every 5 minutes." I remember reading an article about a woman who had a stroke and kept conversing with her husband in the back of an ambulance; her husband reported feeling spooked because even though their first conversation was not rote or predictable, the later ones were basically repeats of the first. "Free will" doesn't mean you're unpredictable; just that your actions have very complex influences. But if a Predictor can ask you a question, reset your mind via chemical amnesia like Sleeping Beauty, and call their knowledge a prediction—or run a simulation of you to get the same information—then you might as well treat them as if they really do know the future.
posted by Rangi at 11:38 AM on November 1, 2015 [3 favorites]
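
The 50.05% threshold above drops out of comparing the two expected values (a worked derivation; write p = P(predict 1-box), L = $low, H = $high):

    E(\text{1-box}) > E(\text{2-box}) \iff Hp > L + H(1 - p) \iff 2Hp > L + H \iff p > \tfrac{1}{2} + \tfrac{L}{2H}

With L = $1,000 and H = $1,000,000, the right-hand side is 0.5005, i.e. 50.05%.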


Has this answer been proposed yet?:

Put ourselves in the shoes of the interviewers, and notice that if she always guesses Tails, she is more likely to get the right answer. Therefore, her credence in Tails should be strictly greater than 1/2.

Doesn't this immediately eliminate certain arguments? I mean, suppose a Tails flip requires waking her up 10 days in a row versus the one day given a Heads.
posted by polymodus at 11:42 AM on November 1, 2015


The 2/3 proposition is simple to understand if Sleeping Beauty is given a dollar for every correct guess at the end of the experiment. If the coin is Heads and she guesses Heads, she can get $1 (one correct guess). If it is Tails and she guesses Tails each time, she gets $2. If she's wrong, obviously, she gets nothing. Guessing Tails is a more profitable strategy, because she can only be right twice if it's Tails.

However, the probability is always 50/50. This is confusing to people because "credence" is about guessing strategies given limited information, not about what actually happened. The number of trials increases only her chance of making a correct guess, not the underlying probability. But the coin flip is controlling the number of interviews.
posted by graymouser at 11:42 AM on November 1, 2015
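
The expected payouts for the two constant-guess strategies in that scheme, written out (assuming a fair coin and $1 per correct interview answer):

    E[\$ \mid \text{always guess Tails}] = \tfrac{1}{2} \cdot 0 + \tfrac{1}{2} \cdot 2 = 1
    E[\$ \mid \text{always guess Heads}] = \tfrac{1}{2} \cdot 1 + \tfrac{1}{2} \cdot 0 = \tfrac{1}{2}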


I just checked Peter Norvig's computerized answer, which articulates the same argument using the idea of conditional probability, P(Heads | you are being interviewed).
posted by polymodus at 11:50 AM on November 1, 2015


She is not asked to guess Heads or Tails, she is asked her credence (degree of belief) that it landed on Heads. So her answer is a percentage from 0% to 100%. It is not clear how her answers are rated for correctness. Here's an argument that a natural choice of scoring rule leads to "33% chance Heads" being the correct answer.
posted by Rangi at 12:08 PM on November 1, 2015


Put me in the "this is a trivial problem that's only controversial due to unclear wording" camp.

Here's a quick-and-dirty JavaScript thing I wrote which runs a brute-force simulation. (Just click the blue "Run" button at the bottom.) Two-thirds of awakenings occur on Mondays; the remaining third occur on Tuesdays. Since Beauty knows all of the mechanics of the experiment, she should be able to predict this as well as we can.
posted by escape from the potato planet at 12:12 PM on November 1, 2015 [2 favorites]


If you're in Beauty's shoes, when you wake up, you scan the room for anything that's at the ready in order to put you under again. If you see a nurse standing by with a syringe, for instance, you can be pretty sure it's Monday and the flip went tails.

If you don't see that, you've got less information. It's either Tuesday/Tails or Monday/Heads. The good news is that you're not going to be knocked out by ethically-questionable researchers anymore, but you should still try to make it not be in vain.

So, first off, you try reeeeeeaaaaalll hard to remember if you'd maybe been interviewed yesterday. I mean, just think about it really hard, you know? Because if you can remember anything about that at all, it means you're on Tuesday, and that coin dropped Tails. Whoo-hoo!

If you can't remember anything, well, I'm sure you tried your best. Whatever. Maybe try looking out the window, if you've got one. See if it particularly looks like a Monday outside. Maybe there's a bank across the street with one of those big digital signs that flashes the date and time and stuff? Get creative. Do the interviewers look tired and a little hungover? Might be a Monday thing.

And if all else fails, just flip your own coin. Law of averages says it'll probably land the opposite way from the one the doctor's flipped, so there you go.

It's not that tough once you think about it.
posted by Navelgazer at 12:16 PM on November 1, 2015 [2 favorites]


She is not asked to guess Heads or Tails, she is asked her credence (degree of belief) that it landed on Heads. So her answer is a percentage from 0% to 100%. It is not clear how her answers are rated for correctness. Here's an argument that a natural choice of scoring rule leads to "33% chance Heads" being the correct answer.

Yes, and the whole point is if you consider the simpler question, it generates a bound that rules out 50%, since that answer is strictly greater than 1/2. It also forces any analysis to apply conditional probability correctly (i.e. identify an invariant in the problem structure).
posted by polymodus at 12:16 PM on November 1, 2015


She is not asked to guess Heads or Tails, she is asked her credence (degree of belief) that it landed on Heads.

I'll also point out that these two cases are, also, equivalent by an iteration argument. [Her credence is the limit of if you repeated an experiment over and over where she simply had to say H or T.]
posted by polymodus at 12:20 PM on November 1, 2015


I'll also point out that these two cases are, also, equivalent by an iteration argument. [Her credence is the limit of if you repeated an experiment over and over where she simply had to say H or T.]

I'm not sure that's really the case. If I'm asked to guess which side a biased coin landed on, and I know it has a 70% chance of landing on Heads, I'll guess Heads 100% of the time. Even if it's an unbiased coin, I could always answer Heads and be right as often as not.
posted by Rangi at 1:04 PM on November 1, 2015


She knows the above details of our experiment.
What credence should she state in answer to our question?


It depends if she's a halfer or a thirder. It doesn't matter what the actual percentages are.
posted by MtDewd at 2:34 PM on November 1, 2015 [1 favorite]


You know you've been reading Metafilter too long when you see a post, think "hey isn't this a double", and then someone in the thread posts the previously from eleven years ago.
posted by Justinian at 2:37 PM on November 1, 2015 [5 favorites]


The nefarious Two-Face has kidnapped Batman and is telling him all about his evil plan: he's stolen a doomsday device that will, if activated, completely erase the Earth from existence. But true to his nature, he intends to flip his famous coin: if it comes up scratched-heads, he'll push the button and destroy us all. If it comes up undamaged-heads, he'll break the machine and turn it in to the authorities.

He lets Batman appeal to his reason and/or morality, trying to find psychological tricks to convince his old friend Harvey Dent not to do this horrible thing. After a while, he gets bored and switches comics, telling Batman "do you think I'd have told you my plan if you could have any chance of stopping it? I flipped the coin 35 minutes ago."

"…being the world's greatest detective, I deduce from our continuing existence your coin came up undamaged-heads." Batman says, demonstrating cheap sarcasm as well as how being part of the system can change the probabilities away from 50/50.
posted by traveler_ at 3:16 PM on November 1, 2015 [2 favorites]


The answer is "50%" and I can prove it:

There are two ways to understand the question and only one is correct. Let's start with the incorrect one:

1. "Tallying up correct answers"
Those who said that 66% is the correct answer argued that if the experiment is repeated x times, she'd have the "correct" answer 66% of the time if she answers "tails" (or puts greater credence in the answer "tails"). There are two problems with this: First, as it is phrased, the problem never states that the week is ever repeated. For all we know, the whole experiment might end on Tuesday of the first week. Secondly, if we understand that "what credence do you give to..." is just a funny way of saying "what is the probability that..." then "66%" is still the wrong answer. Sleeping Beauty is asked the probability of a heads in a coin toss. The probability is always 50%. If the question were "Given that you are awake now, do you think the previous toss was heads or tails? You'll get 1 gold coin for every correct answer!" then "tails" would be a winning strategy. But that is a different problem.

Two-thirds of awakenings occur on Mondays; the remaining third occur on Tuesdays.

No, there is only one awakening on Monday and possibly one on Tuesday. It doesn't say that the experiment is repeated. Correct answers are not tallied up.

2. "The day of reckoning"
In this scenario, correct answers are not tallied up. Rather, on a certain given day during the experiment (which, as we have seen, might only last two days, although that doesn't matter), Sleeping Beauty's answer matters. That day is the "day of reckoning". The answers on the other days don't matter (note that it doesn't say anywhere that they matter). To make it more interesting, let's say she'll be free to go if she answers correctly on the day of reckoning and is permanently put to sleep if she gets it wrong. And let's rephrase the question like this: On the previous coin toss, was the result "head" or "tails"?
Since the correct answer is 50% probability for either one, which is a bit anticlimactic (no winning strategy there), let's walk through it with slightly different parameters: Let's say the coin is loaded, with a 60% chance of heads and only 40% for tails. And let's say that Sleeping Beauty is awakened on the first day of every year for the coin toss. In the case of heads, she is put to sleep for the rest of the year. In the case of tails, she is awoken every day until the next year (next coin toss). And let's say that the day of reckoning is a (random) day of the 100th year. So, if the coin toss on Jan. 1 is "heads" then that's the day of reckoning; otherwise it's a random day of the year. Sleeping Beauty knows that. So when she hears the question "what was the coin toss on Jan. 1 of this year?" what is the winning strategy? Keep in mind that the "day of reckoning" will by definition be the last day of the year (that matters).
Obviously the correct answer in this scenario is "heads" (60% chance of survival). In other words, the result of the coin toss in the 100th year is not affected by events occurring after that (i.e. how often she woke up). And as you can easily see, it doesn't have to be the 100th year, but "the day of reckoning" could be any day of any year in the future.

This is the problem slightly rephrased, but it is equivalent to that stated in the question: no tallying of right answers, the only relevant question is "what is the probability of a given random coin toss." The answer is independent of the number of times she has been awoken before.
posted by sour cream at 5:21 PM on November 1, 2015 [1 favorite]
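
A sketch contrasting the two scoring schemes described above (Python; it uses the 60%-heads loaded coin from the comment and treats a tails year as 365 wakings, which is illustrative): guessing heads on a single randomly chosen "reckoning" waking wins about 60% of runs, even though most wakings follow tails.

    import random

    runs = 100_000
    per_waking_correct = per_waking_total = 0    # scheme 1: every answer is tallied
    reckoning_correct = 0                        # scheme 2: one randomly chosen waking counts

    for _ in range(runs):
        coin = "H" if random.random() < 0.6 else "T"   # loaded coin, 60% heads
        wakings = 1 if coin == "H" else 365
        guess = "H"                                    # same guess at every waking
        per_waking_total += wakings
        per_waking_correct += wakings * (guess == coin)
        reckoning_correct += (guess == coin)           # the coin is the same on every waking

    print(per_waking_correct / per_waking_total)   # far below 0.6: most wakings follow tails
    print(reckoning_correct / runs)                # about 0.6: the day-of-reckoning score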


Sour cream: The arguments about multiple weeks aren't because we don't get that there is only one waking, but because the idea of probability is defined by infinite trials. To say that there's a 50% chance of something happening (as you say) is identical to saying that with an infinite number of trials, this would happen 50% of the time. The idea isn't that it would be done next week, and the week after, and the week after, but something close to "imagine a universe where this moment repeats infinitely many times."
posted by If only I had a penguin... at 5:39 PM on November 1, 2015 [2 favorites]


I feel that Sleeping Beauty knowing how the experiment is run is what will determine her answers.
posted by Aya Hirano on the Astral Plane at 5:53 PM on November 1, 2015


If only I had a penguin...: the idea of probability is defined by infinite trials

Not necessarily, that depends on whether you're a frequentist or a Bayesian.
posted by biogeo at 6:23 PM on November 1, 2015 [2 favorites]


Also, the discussion in the main link makes it clear that there are frequentist and Bayesian arguments for both the "halfer" and "thirder" positions, so that distinction isn't enough to resolve the issue.

Great problem and link, I changed my mind about half a dozen times while reading it. It makes me doubt whether I have a coherent notion of probability.
posted by biogeo at 6:29 PM on November 1, 2015 [1 favorite]


This is the problem slightly rephrased, but it is equivalent to that stated in the question: no tallying of right answers, the only relevant question is "what is the probability of a given random coin toss." The answer is independent of the number of times she has been awoken before.

If you want a one-event version of the reasoning, imagine I tell you and two friends that I'm going to flip a coin once. I'll then offer one of you a bet (if it's heads) or two of you a bet (if it's tails). You'll share the winnings, though obviously you can't talk after I've made the offer. What's the best betting strategy? Hopefully you'd agree tails--I'm basically giving you & your cronies 2:1 odds.

This is really the exact same thing as any bias in sampling--if I do a poll and score each Republican twice my results will be skewed.

There are two key parts I think that might confuse:

- You think because it's a "fair coin" the odds have to be fifty-fifty. But I am a "biased bettor." You are really wagering not on the odds that the coin landed heads, but on the odds that my question refers to a heads toss rather than a tails toss. And you know I ask about tails twice as often.

- If you are thinking about this prospectively--what would SB bet on Friday, before the toss--you think it should be consistent. That's correct, but when you figure out the best betting strategy Friday night, remember that on tails you are locked into two bets and on heads you only get one. Again, SB is *not* really being asked to predict the coin flip on Friday night; she's being asked to predict what the right answers to some biased questions are on Monday morning.
posted by mark k at 6:31 PM on November 1, 2015 [2 favorites]


Don't we have to factor in what the probability is that Sleeping Beauty's education included advanced statistics training? Because if she spent her whole childhood staring into mirrors, singing at bluebirds, and cleaning for dwarves, then chances are her answer to the question will be something like, "Huh? What's a coin?" or best case scenario, "Um, I think it's 50/50?"
posted by lollusc at 6:55 PM on November 1, 2015 [2 favorites]


Not necessarily, that depends on whether you're a frequentist or a Bayesian.

This is moving a little outside my knowledge, so I would be willing (and interested) to hear about how I'm wrong, but I thought frequentism vs. bayesianism is about how we make inferences about probability from limited observational data, not about the nature of what probability IS.
posted by If only I had a penguin... at 7:29 PM on November 1, 2015


The Bayesian analysis is flawed: yes, the probability of heads or tails is always 50% either way, but the issue is that when tails is flipped, SB is asked the question twice, a factor that I don't believe the Bayesian argument takes into account. If she were to receive a dollar each time she was asked the question, how much money would Bayesian analysis predict she has after each trial?
posted by jamincan at 7:38 PM on November 1, 2015


Yeah, the point of thinking about this being repeated for many weeks is to bring in a frequentist sense of intuition about what the probabilities would mean. And although it isn't super-clear, the way the problem is posed builds “given that you are awake now” directly into Sleeping Beauty's credence.

Bayesianism is an alternative approach to (or even philosophy of) probability, contrasted with frequentism, that usually is closer to human intuitive thoughts about the subject. But, it's also vulnerable to oversights and apparent paradoxes where people don't correctly bring things about a scenario into the math—which is what the linked thread has going on with people who aren't sure how to treat Sleeping Beauty's knowledge of being awake as new information that updates her belief of the coin's probability. (Biogeo, that's a long thread; where did you find a good frequentist argument for 50/50?)

If only I had a penguin...: …not about the nature of what probability IS.

In everyday work the difference between the two often just comes down to differences in practice. But at the foundation they do have very different ideas about the meaning of "probability": Bayesians are ok with using it to describe degrees of belief, while frequentists don't allow that. I once mentioned using Bayesianism to a group of ecological statisticians and they almost threw their pens at me. And these days I mostly agree: while I often won't shut up about reasoning with partial information and beliefs that come in degrees rather than strict True and False propositions, I feel that's a different type of thing than probability, albeit one where the math can be similar, and conflating them leads to serious flaws in formal reasoning systems.

Oh, and mark k's description of being a “biased bettor” reminded me of this xkcd on the difference between the philosophies.
posted by traveler_ at 8:02 PM on November 1, 2015 [2 favorites]


imagine I tell you and two friends that I'm going to flip a coin once.

I think the version with more people dodges the real paradox here. Say you flip the coin Sunday morning and immediately ask Beauty how confident she is that it came up tails -- obviously she says 50%. But then you ask her "On Monday morning, how confident will you be that it came up tails?" If she's a thirder, she has to say 66%.

That's really bizarre and it doesn't happen in some of the variants -- if you flip a coin Sunday and ask either one or two out of three people on Monday, then on Sunday evening, all they know is that on Tuesday, they'll have a 66% confidence if you asked them and a 33% confidence if you didn't. They won't be sure until either you ask them or you don't, but Beauty is sure about her future confidence immediately.

That's the thing that makes this problem stick in my head -- I agree completely that if you run the experiment many times, then about 2/3s of the time you ask the question, the right answer is tails, but there's something really weird about the idea that she knows that tomorrow, she'll answer 66%, but today, her answer is 50%.
posted by ectabo at 8:10 PM on November 1, 2015


there's something really weird about the idea that she knows that tomorrow, she'll answer 66%, but today, her answer is 50%.

Why? There's all kinds of cases where we believe (correctly) that our future beliefs will be wrong or different from our present beliefs. Say I almost never have bacon for breakfast, but I did this morning, and further assume I maintain no special breakfast records. If you ask me "What likelihood will you give for having had bacon on Nov 1 2015, when asked a year from now?", the correct answer is "small," even though at this moment I will (correctly) say that it happened with near certainty. This isn't meant to be an exact analog, just an example that should be plausible in which I expect to have different beliefs than I do now.
posted by PMdixon at 8:25 PM on November 1, 2015


I think a lot of the confusion about this problem stems from ideas of personal identity, not probability. So consider a different experiment: We get two people, Sleeping Beauty and Snow White, and then flip a fair coin. If it lands Heads, we ask one of them (chosen at random) their credence that the coin landed Heads; if it lands Tails, we ask both of them.

In this experiment, it should be more clear that 1/3 is the correct answer. If not, imagine it with a hundred and one dalmatians instead (then the answer would be 1/102). Any person (or dog) who is asked the question knows that they have a 1 in N chance of being asked at all if the coin landed Heads, but would certainly be asked if it landed Tails.

there's something really weird about the idea that she knows that tomorrow, she'll answer 66%, but today, her answer is 50%.

The multi-person experiment also avoids this apparent paradox. Say that the coin landed Heads, and we only asked Snow White her credence. She would answer 1/3. Then when it's over, we interview both her and Sleeping Beauty (and they each know that they're both being interviewed) and ask for their credence now. They'll each answer 1/2. The weirdness of the original comes from the way multiple instances of Sleeping Beauty collapse into one after the experimenters stop erasing her memory.
posted by Rangi at 9:20 PM on November 1, 2015 [6 favorites]
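
A quick check of the multi-person version (a Python sketch; n_people = 2 is the Snow White case, 101 the dalmatian case):

    import random

    def credence_given_asked(n_people, runs=1_000_000):
        # Heads: ask one of n_people at random; tails: ask all of them.
        # Track P(heads | person 0 was asked).
        asked = heads_when_asked = 0
        for _ in range(runs):
            coin = random.choice(["H", "T"])
            i_was_asked = (random.randrange(n_people) == 0) if coin == "H" else True
            if i_was_asked:
                asked += 1
                heads_when_asked += (coin == "H")
        return heads_when_asked / asked

    print(credence_given_asked(2))     # about 1/3
    print(credence_given_asked(101))   # about 1/102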


but there's something really weird about the idea that she knows that tomorrow, she'll answer 66%, but today, her answer is 50%.

But she has an additional piece of information: what day it is. Just like in the Monty Hall problem: if you ask the contestant what her confidence is that, if she changes to a new door, she will have the grand prize, she will answer 1/3; but if you then reveal what is behind one of the doors and repeat the question, she will answer 2/3.

Each additional piece of relevant information changes the problem.
posted by Justinian at 10:34 PM on November 1, 2015


But she has an additional piece of information: what day it is.

No she doesn't. If that were the case, she would answer 50% on Monday no matter which face the coin landed on, and then if she's asked again on Tuesday she'll know for certain that it landed on Heads.
posted by Rangi at 10:49 PM on November 1, 2015


Ah, I misread the problem.
posted by Justinian at 10:55 PM on November 1, 2015


"We will give you a dollar for each right answer," is pretty simple to reason about, but attaching an incentive changes the experiment. For instance, you could change the incentive to something like, "You must answer right at least once or at the end of the experiment you will go to prison." Any single-answer strategy will send her to jail at 50% probability, but she can do a bit better if she chooses heads as often as she can statistically justify, because she only has one chance to get it right in the heads case. Roughly, she does better by choosing tails at 1/2 for a two-night tails case; 1/n for an n-night tails case. We've totally changed her answer by changing the condition, but neither condition can change anything about what happened to the coin.

The original question sidesteps all of this by specifically asking "What should she offer as the probability that the coin came up heads?" Since the question is structured to never give her new information, her answer shouldn't change. ("I am awake" is never new information because it is part of the premise of the experiment.) Foreknowledge of the experimental condition can't change her answer either, because it's explicitly foreknowledge; that would mean that her foreknowledge of the experimental condition dictates that on Sunday she predicts that the coin will come up tails with 2/3 probability, which is a contradiction, because she knows the coin is fair.
posted by WCWedin at 12:05 AM on November 2, 2015


But then you ask her "On Monday morning, how confident will you be that it came up tails?" If she's a thirder, she has to say 66%.

But this is problematic. If you ask her this, you are giving her more information than she receives in the scenario as stated (that she will be awoken on Monday) and her answer should be 100%, because you've revealed that you've rigged the experiment.

You can't, without cheating, ask the question you want to ask on Sunday night, because that question is not equivalent to asking SB her credence on an unknown day.

On Sunday night, you (the experimenter) do not have the information of whether SB will be awoken once or twice. The fact that your question doesn't make sense without this information seems strongly supportive of the view that there is new information imparted by waking SB up.

Even as thought experimenters, we are bound by the inherent uncertainty about when the question will be asked.
posted by howfar at 1:33 AM on November 2, 2015


I never know whether these things are interesting or not. I feel like I could write down different Bayesian models that would provide different answers depending on how I parsed the question. To me the most natural parsing is that SB is asked to provide her personal probability (i.e. Bayesian belief) for the proposition that this specific coin flip resulted in a heads (H), conditional on SBs knowledge of the experimental design (D) and on the fact that SB is now awake (A).

My assumption is that SB should assume that the design of the experiment has no causal/physical relation to the outcome of the coin flip, so under those assumptions Bayes' rule gives:

P(H | A,D) \propto P(A | H,D) P(H)

The prior probability that my SB would place on the coin coming up heads assumes a fair coin, so the prior is P(H) = 0.5. It is the likelihood that is tricky. What should SB assume about the conditional probability that she is awake (at an unknown time) given that the coin has come up heads (or tails) and given the experimental design itself?

Assigning exact probabilities there seems tricky, but I think it's not unreasonable for her to assume that the probability of being awake if the flip came up tails P(A | T,D) is double what it would be if the coin came up heads P(A | H,D). Putting all that together means that her posterior probability for heads P(H | D,A) is one-third.

So I'm in the 1/3 camp, if that is the intended parsing of the question.

Where I get frustrated at the SB problem and related problems (Monty Hall, I'm glaring at you) is that they deliberately(?) juxtapose a physical process about which it is most natural to think in frequentist terms and repeated events (coin flips are easier to think about in terms of long run frequency) with an inferential scenario in which it is most natural to talk in terms of beliefs about a one off event (SB needs to say something about what she believes just happened in this experiment, not across lots of hypothetical repeats). This is then (in some formulations of the SB problem) wrapped up in an ambiguous phrasing that asks something like "what is the probability that the coin came up heads?" Well, are you asking SB about her beliefs about the specific event that just happened (in which case she should report P(H | D,A)), or are you asking her to make an assertion about the physical properties of the coin, which are not affected by the biased sampling in this experiment (in which case she should simply report her prior P(H), and under that interpretation I would say that 0.5 is the right answer)?

Just in case that wasn't confusing enough, the SB problem then adds this weird "wake up and forget" aspect to the problem that means that SB has to construct a likelihood function for her current wakefulness that accounts for the fact that her previous waking states (if they existed) have been censored from her data set. Is it really that surprising then that if you ask a bizarre question people struggle to work out what the answer should be?
posted by langtonsant at 1:34 AM on November 2, 2015 [1 favorite]
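
Filling in the "putting all that together" step above (writing q = P(A | H,D), so that P(A | T,D) = 2q under that assumption; LaTeX notation):

    P(H \mid A, D) = \frac{P(A \mid H, D)\,P(H)}{P(A \mid H, D)\,P(H) + P(A \mid T, D)\,P(T)} = \frac{q \cdot \tfrac{1}{2}}{q \cdot \tfrac{1}{2} + 2q \cdot \tfrac{1}{2}} = \frac{1}{3}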


So the thirder position weights evenly the three propositions:

A: It is Monday and the coin landed Heads.
B: It is Monday and the coin landed Tails.
C: It is Tuesday and the coin landed Tails.

But as I understand it, the halfer position says that there is a 50% chance of only A happening, and a 50% chance of both B and C happening. A has half the probability space due to the coin flip.

Stepping back, the naïve version of the thirder argument goes: if the trial is run a million times, there will be about 1.5 million interviews, and of these 1 million will be run with Tails having been the result. And there is a certain logic to that. However, to understand why it's not as obvious as it may seem, note that in half the trials (but not half the interviews), Sleeping Beauty will be wrong.

The thirder argument relies on an equal probability of A, B, and C. But the probability of A given Heads is 1. The probability of B given Tails is 1/2. The probability of C given Tails is 1/2. Since the probability of Heads and Tails are both 1/2, the actual probability of A is 1/2, the actual probability of B is 1/4, and the actual probability of C is 1/4. Information about the trial doesn't change that.
posted by graymouser at 1:55 AM on November 2, 2015 [4 favorites]


But this is problematic. If you ask her this, you are giving her more information than she receives in the scenario as stated (that she will be awoken on Monday) and her answer should be 100%, because you've revealed that you've rigged the experiment.

I'm wrong here. I was confusing Monday and Tuesday. But you are still giving her additional information (that it is Monday - she does not know this during the interview; she only knows that, if it is Tuesday, she has also been interviewed on Monday - without, of course, knowing that it is Tuesday).
posted by howfar at 2:28 AM on November 2, 2015


Dear Sire, if you ask me about the probability of a random coin flip occurring next week, my answer is 50% each for heads and tails. If you wake me up again next week and then ask me the same question about that planned coin flip right after it has occurred, then I will still answer 50% since, from my perspective, the fact that this random event now lies in the past does not affect the chances of its outcome. And if you ask me again on the next day, my answer will still be the same, since the fact that you asked me the question before does not change the probabilities involved. The correct answer is still: 50% for a random coin flip.

Now, if you choose to ask me to guess whether you threw heads or tails in your last coin flip, rewarding me for each correct answer, and you ask me more times in the case of tails, then I will maximize my earnings by choosing tails.

But, dear Sire, you need to make up your mind and ask your question more precisely: Are you asking me to make a guess about the result of your last coin flip under the stated conditions (in which case, my winning strategy will be to guess "tails" every time)? Or are you asking me to predict the probability of a coin toss, which is always 50%, even if you keep repeating the question 100 times after you have tossed the coin and you know the result (but I don't)?

Depending on what you have in mind, the answer 50% or the answer 66% is going to be "correct". But if you don't give me more information on how you will judge the "correctness" of my answer, this puzzle is unresolvable.

Yours truly,

Sleeping Beauty
posted by sour cream at 2:52 AM on November 2, 2015 [1 favorite]


The thirder argument relies on an equal probability of A, B, and C. But the probability of A given Heads is 1. The probability of B given Tails is 1/2. The probability of C given Tails is 1/2. Since the probability of Heads and Tails are both 1/2, the actual probability of A is 1/2, the actual probability of B is 1/4, and the actual probability of C is 1/4. Information about the trial doesn't change that.

I don't think this is right. If you get tails, both B and C happen 100% of the time. Hence the probability of it being either Monday or Tuesday is 1.
posted by howfar at 3:01 AM on November 2, 2015


The probability that the coin landed heads is either 0 or 1.

If I'm Sleeping Beauty, and you're waking me up (you claim) because of a coin flip and asking what I think about the result of that coin flip while even keeping me in the dark about what day it is and whether I have been woken before, my answer is "There are four lights."

If you are telling the truth then, given that I have been woken up, two thirds of possible cases in which I have been woken up had a head. Or maybe a tail. I don't care - I just want to go back to sleep.
posted by Francis at 3:01 AM on November 2, 2015


I don't think this is right. If you get tails, both B and C happen 100% of the time. Hence the probability of it being either Monday or Tuesday is 1.

No, B says "It is Monday" and C says "It is Tuesday." Both can't be true at once. The point being that a given guess has a 50% chance of being correct, since A will be true 50% of the time, B will be true 25% of the time, and C will be true 25% of the time.
posted by graymouser at 3:28 AM on November 2, 2015 [1 favorite]


graymouser, you have to remember that there is really only one coin toss happening here. It just happens that SB is being questioned twice for tosses resulting in tails, and only once for tosses resulting in heads. The possible scenarios look like this:

Heads tossed
-> SB Questioned (A)
Tails tossed
-> SB Questioned (B)
-> SB Questioned (C)

Now, as an outside observer, it is easy to see that the probability of heads or tails is independent of SB; however, SB isn't an independent observer. The question she is asked is: at this point in time, what does she believe the coin toss was? She has no way to determine if she is being questioned at point A, B, or C; the scenario makes it impossible to distinguish them, nor does she know the result of the toss. Therefore, for her, each time she is questioned is an equally probable scenario. Therefore P(A) = P(B) = P(C). Knowing that, it is easy to see that tails is twice as likely to be correct as heads, since it is the correct answer for scenarios B and C.

As an outside observer, this is unintuitive, but only because the question changes for outside observers. The question is no longer what confidence you have that heads or tails was tossed, but what confidence you have that SB was awakened after tails was flipped versus after heads was flipped. The key event here isn't the coin toss; the key event is the awakening and questioning. There are two questionings for each tails and one questioning for each heads. This means that although heads and tails are equally probable, you are twice as likely to be woken under tails as under heads.
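
A quick Monte Carlo sketch in Python (just a toy rendering of the setup as described, nothing more) makes the two ways of counting concrete:
    import random

    def simulate(trials=100_000):
        heads_awakenings = tails_awakenings = 0
        for _ in range(trials):
            if random.random() < 0.5:
                heads_awakenings += 1     # heads: one questioning (A)
            else:
                tails_awakenings += 2     # tails: two questionings (B and C)
        total = heads_awakenings + tails_awakenings
        print("fraction of questionings that follow tails:", tails_awakenings / total)

    simulate()   # prints roughly 0.67; per trial, tails is of course still ~0.5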
posted by jamincan at 5:53 AM on November 2, 2015


Here's the way of thinking about it that makes it clearest to me:

She wakes up and wonders whether it's Monday or Tuesday. Which is more likely? She realizes it's twice as likely that it's Monday since there will be a Monday waking in every trial and a Tuesday waking in only half the trials. If this is Monday, there's a 50% probability the coin was tails. If it's Tuesday, there's a 100% probability - but Tuesday wakings only happen half the time. So the probability of Monday tails is exactly the same as the probability of Tuesday tails. That means that on Monday there's a 50-50 chance the coin was tails (or heads) and on Tuesday there's also a 50-50 chance the coin was tails (or heads.)

You might be thinking she can eliminate the Tuesday heads possibility because she knows she isn't experiencing that one. But she can't eliminate it. She doesn't know whether she's not experiencing it because it didn't happen or because it's still Monday. Since it could be Monday (in fact, it probably is Monday, given the odds), she has no information at all about what happened or is going to happen on Tuesday.
posted by Redstart at 5:56 AM on November 2, 2015 [2 favorites]


There's only one coin toss prior to waking her on Monday. If tails is tossed, she is awoken on Monday and reawoken on Tuesday 100% of the time. Whatever the coin toss was on Monday stands for Tuesday as well.
posted by jamincan at 5:59 AM on November 2, 2015


There's been an overthrow in cosmology, and the Big Crunch is back on the table. Even more remarkably, all of our best models agree that if the Big Crunch is indeed a physical reality, the history of the universe plays out exactly the same way each time, over and over, forever. No one's quite sure what to believe, though: whether we Crunch or just suffer heat death depends very sensitively on physical properties of the universe that we may never be able to measure with sufficient precision; our best measurements tell us that the universe is balancing on a knife's edge: the Big Crunch is about 50% likely.

A researcher proposes an experiment: He has designed a device that will cause a solar system–level cataclysm in a forever-expanding universe, and frequentists have convinced him that it's probably safe to run the experiment, so he starts to build the device.

New research comes out that puts the odds of a Big Crunch scenario way down from half to one in a million. Frequentists point out that infinity is way bigger than a million, so the experiment is still safe. Bayesians and reasonable people everywhere band together to jail the researcher, destroy his device and schematics, and scold the frequentists.
posted by WCWedin at 6:17 AM on November 2, 2015


No, B says "It is Monday" and C says "It is Tuesday." Both can't be true at once. The point being that a given guess has a 50% chance of being correct, since A will be true 50% of the time, B will be true 25% of the time, and C will be true 25% of the time.

A, B and C happen with identical frequencies--specifically, each is expected to happen 50% of the times an experiment is run.
posted by mark k at 6:40 AM on November 2, 2015


No one's quite sure what to believe, though: whether we Crunch or just suffer heat death depends very sensitively on physical properties of the universe that we may never be able to measure with sufficient precision; our best measurements tell us that the universe is balancing on a knife's edge: the Big Crunch is about 50% likely.

I'm walking at the moment but I'm pretty sure this attempted variant is begging the question. If we have already determined (by some outside means) that it's 50/50, we are not in the same position as SB, who has been asked to determine this very probability.
posted by howfar at 6:57 AM on November 2, 2015


It's a bit like positing "SB has looked out the window and noticed (some day dependent variable)..."
posted by howfar at 6:58 AM on November 2, 2015


The scenario confuses me, all I can think of is that it'd be useful to tabulate interview attempts by stabbing myself in the arm and HEY WHY ARE YOU GUYS DRUGGING ME GIVE ME BACK MY KNIFE
posted by Theta States at 7:00 AM on November 2, 2015


The thing that I can't get past is that before the experiment, SB should have a 50% credence that the coin landed Heads. My thought is that the attempt to look at this repeated "a million times," like statistics often does, in this case overlooks the independence of each trial.

When the coin is flipped, there is a 50% chance that it is Heads and she will be woken once, and a 50% chance that it will be Tails and she is woken twice. Sleeping Beauty has her pre-experiment credence of 1/2 with full rationality. Let me demonstrate why I think it's an issue.

Alice is watching on Monday. She sees Sleeping Beauty being awakened, and doesn't know the result of the coin flip, but knows that it's Monday, and knows the rules. She has a 1/2 credence that the coin is Heads. Before the experiment, Alice's credence was 1/2, just like Sleeping Beauty's. I don't think there is a thirder explanation for the difference.
posted by graymouser at 7:28 AM on November 2, 2015


I don't think there is a thirder explanation for the difference.

The explanation is, I think, that you have prevented Alice from having any knowledge of what happens on Tuesday. SB, on any awakening, may (from her perspective) have knowledge of what happens on Tuesday, because (as far as she knows) it may actually be Tuesday. A, on the other hand, does not have any means by which she can gather information about Tuesday. It seems to me like A doesn't have more relevant information than SB, she has less - the knowledge that it could be Tuesday is relevant.

Fuck I think I'm turning into Donald Rumsfeld.
posted by howfar at 7:45 AM on November 2, 2015


It's a bit like positing "SB has looked out the window and noticed (some day dependent variable)..."

I would say it's more like "SB is woken up and has no new information at that time, but she does have an understanding of the experiment and a sound model for how a coin flip works." The physical model of the whole universe based on principles derived from previous experiments is equivalent to flipping the coin a lot beforehand and building a model that the coin is fair. Both SB and the researcher are extrapolating from abstract models; both are equally ignorant of the results of past trials (if there were any); both will change their predictions based upon the type and severity of the incentives and the number of trials in the experiment – without having to or being able to update the predictive model. That last bit is what raises alarm bells for me.

This line of reasoning (probably?) depends upon an anthropic assumption: the probabilities of "I am awake" and "the universe I inhabit exists" are always exactly 1. I don't think that's a big ask, though.
posted by WCWedin at 7:46 AM on November 2, 2015


The physical model of the whole universe based on principles derived from previous experiments is equivalent to flipping the coin a lot beforehand and building a model that the coin is fair.

But we can only build the model if we know what kind of rules the universe we live in follows. That is what SB is being asked to predict.
posted by howfar at 7:53 AM on November 2, 2015


> I don't think there is a thirder explanation for the difference.

In your example, Alice's answer is conditioned upon it being Monday, whereas SB's answer is conditioned upon her being awake. Those carry different information.
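
A toy simulation in Python (nothing canonical, just the two conditionings side by side) shows the difference:
    import random

    def conditioned_credences(trials=100_000):
        interviews = []                          # one record per interview: (day, outcome)
        for _ in range(trials):
            outcome = random.choice(["heads", "tails"])
            interviews.append(("Mon", outcome))
            if outcome == "tails":
                interviews.append(("Tue", outcome))
        mondays = [o for day, o in interviews if day == "Mon"]
        p_heads_given_monday = mondays.count("heads") / len(mondays)
        p_heads_given_interview = sum(o == "heads" for _, o in interviews) / len(interviews)
        print(p_heads_given_monday, p_heads_given_interview)   # roughly 0.5 and 0.33

    conditioned_credences()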
posted by Westringia F. at 8:02 AM on November 2, 2015


That is what SB is being asked to predict.

I agree. From there, I think we depart in that I believe that the experimental design doesn't give SB any useful information about the coin flip. Starting with a useful strategy for maximizing outcomes and backtracking to a credence is probably not sound. Or do we not disagree? I have to confess I've lost track.
posted by WCWedin at 8:22 AM on November 2, 2015


In your example, Alice's answer is conditioned upon it being Monday, whereas SB's answer is conditioned upon her being awake. Those carry different information.

That's not correct. SB is awake both in the original and in the Alice modification; Alice has both pieces of information. The problem is that SB and Alice both start out from a pre-experiment credence of 1/2. Alice learns that it's Monday; SB doesn't. Both learn that SB is awake. But Alice's credence remains at the 1/2 it was before, while SB's changes.

It gets weirder. If Alice doesn't learn what the outcome of the experiment is, and is asked about it on Wednesday, her credence is 1/2. If SB gets the amnesia drug again and is woken up on Wednesday, and not told how many times she was woken up or what the result was, her credence is also 1/2. So the thirder position goes from a straightforward assertion to a weird metaphysical statement that being in the experiment temporarily changes SB's credence, while Alice's credence doesn't ever change. This is an indication that there is a problem in the way that you are calculating the probabilities.

In the external world, the probability of Heads was always 1/2. It never moves. What the thirder position does is to show an optimal betting strategy if you were betting on each individual awakening; since you get more awakenings on Tails, it wins the bet in 2/3 of the awakenings. But it's a mistake to make this into a rational credence, because for the whole trial, P=1/2.
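
For what it's worth, the two bookkeeping schemes are easy to put side by side in a short Python sketch (a toy model, assuming she guesses Tails at every awakening):
    import random

    def always_guess_tails(trials=100_000):
        bet_wins = bets = trial_wins = 0
        for _ in range(trials):
            tails = random.random() < 0.5
            awakenings = 2 if tails else 1
            bets += awakenings
            if tails:
                bet_wins += awakenings    # the guess is right at both awakenings
                trial_wins += 1           # and right about the trial as a whole
        print("per-awakening win rate:", bet_wins / bets)       # comes out around 2/3
        print("per-trial win rate:    ", trial_wins / trials)   # comes out around 1/2

    always_guess_tails()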
posted by graymouser at 8:30 AM on November 2, 2015 [1 favorite]


In the external world, the probability of Heads was always 1/2. It never moves.

But nobody disputes this. I'm confused as to what position you are opposing.
posted by howfar at 8:38 AM on November 2, 2015 [1 favorite]


Put another way: if we insist that SB can give us the right answer if we just pay her enough, assume that all actors are rational, and note that SB is functionally an automaton each morning and that always-tails is the objectively best strategy, then you can skip all the befuddling sleep-drug procedures and cut to the chase. The dollar-per-right-answer scenario reduces to a simple casino game: the doctor will pay SB $2 if a coin comes up tails. SB is ecstatic that she doesn't have to go under, and she's even willing to pay $1 to play the game. With all the cruft taken out, it's easier to see that conflating the payout with the probability is plainly wrong.
posted by WCWedin at 8:53 AM on November 2, 2015


This is an indication that there is a problem in the way that you are calculating the probabilities.

No, it means you're taking the sample space of interview outcomes, not of coin flips. Clearly if the setup was such that heads meant no interview at all, everyone would agree that the correct answer is 100% tails, right? Why is this any different?
posted by PMdixon at 9:34 AM on November 2, 2015


No, it means you're taking the sample space of interview outcomes, not of coin flips. Clearly if the setup was such that heads meant no interview at all, everyone would agree that the correct answer is 100% tails, right? Why is this any different?

It's obviously different. In the "no interview" scenario, the outside observer Alice also knows what happened as soon as SB is interviewed or not interviewed. That is, the mere existence of an interview indicates that Tails occurred, both inside the system and outside of it.

You still can't explain the change in SB's credence but the non-change in Alice's credence on Monday. Where does SB's information come from? On Sunday, their information is the same, and they both have a credence of 1/2. On Monday, Alice has all the information SB has, plus the information that it is Monday, but the thirder claims that SB's credence moves to 1/3 while Alice's stays at 1/2. On Wednesday (given that Alice has no access to the experiment on Tuesday and SB has amnesia drugs), SB's credence returns to 1/2, which has been Alice's credence all along.

This can't happen, because the only thing that SB learns during the experiment is that she has been interviewed at least once, which she also knows after the experiment, and which Alice also knows.
posted by graymouser at 10:22 AM on November 2, 2015


(N.B.: in the modified hypothetical, SB can't see Alice to determine that it is Monday.)
posted by graymouser at 10:24 AM on November 2, 2015


Where does SB's information come from?

From the fact that she doesn't know it's Monday. Suppose there were no amnesia drug, and everything else remained the same. Then her credence would be 50/50 on Monday, and 0/100 on Tuesday (assuming a Tuesday interview happened). So on average, per run of the experiment, she will have 0.5 interviews after a heads and 1 after a tails, and her credences will sum to this on average: on Monday she will have one interview contributing half credence to each, and on Tuesday there's a 50% chance of an interview at which she contributes full credence to tails. But this is because she knows what day it is, so she knows whether or not she has information yet.

With the amnesia, she no longer knows that. But she still knows she is going to get it (make the observation or not). Therefore the credences must be spread out equally across the interviews.

Or, again, most straightforwardly: She has equiprobable chance per interview of being in the states (H, Mon), (T, Mon), (T, Tue).
posted by PMdixon at 10:33 AM on November 2, 2015 [2 favorites]


With the amnesia, she no longer knows that. But she still knows she is going to get it (make the observation or not). Therefore the credences must be spread out equally across the interviews.

Or, again, most straightforwardly: She has equiprobable chance per interview of being in the states (H, Mon), (T, Mon), (T, Tue).


Does she have credence 1/2 again on Wednesday? If so, her basing her credences on the fact of the interview is wrong.
posted by graymouser at 10:39 AM on November 2, 2015


Does she have credence 1/2 again on Wednesday?

Yes.

If so, her basing her credences on the fact of the interview is wrong.

No. Based on her knowledge of the experiment, Wednesday is unaffected by the outcome of the experiment. Monday and Tuesday, however, were: They were rendered indistinguishable, and she understood that she would experience more days under tails than heads.

Consider the following urn problem:

I flip a fair coin, unseen by you. If it's heads, I put a black and a white ball in the urn. Tails, 2 white balls. I draw a white ball.

What's the probability I flipped tails? If you're going to tell me that's 50/50 then please let me arrange a card game some time.
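
(Spelling out Bayes' rule for that urn, as a tiny Python sketch:)
    # The urn: heads -> {black, white}; tails -> {white, white}. A white ball is drawn.
    p_heads, p_tails = 0.5, 0.5
    p_white_given_heads = 0.5
    p_white_given_tails = 1.0

    p_white = p_white_given_heads * p_heads + p_white_given_tails * p_tails
    p_tails_given_white = p_white_given_tails * p_tails / p_white
    print(p_tails_given_white)   # 0.666..., i.e. the white draw favours tails 2:1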
posted by PMdixon at 10:48 AM on November 2, 2015 [1 favorite]


No. Based on her knowledge of the experiment, Wednesday is unaffected by the outcome of the experiment. Monday and Tuesday, however, were: They were rendered indistinguishable, and she understood that she would experience more days under tails than heads.

She knew on Sunday, and knows again on Wednesday, that at least one interview occurred. If her credence on Sunday was 1/2, and her credence on Wednesday is 1/2, then her credence on Monday and Tuesday is still 1/2.

If we modify the interview so that every time Sleeping Beauty is interviewed, she guesses Heads or Tails, the thirder strategy says she should answer Tails. Right? After all, she has a 2/3 credence of Tails based on the 1/3 solution. But she only has a 50% chance of being right. If the coin flip is Heads, she will give the wrong answer, but give it once. If the coin flip is Tails, she will give the right answer, but give it twice. For the whole experiment, there is a 50% chance she gives the wrong answer once, and a 50% chance she gives the right answer twice.

It's a really interesting problem when you realize why the thirder solution is not obviously correct: the problem, treated as an aggregate of interviews, tells you nothing about whether or not SB is correct in a given instance. I do agree that the 1/3 assumption would be better if you were betting on each individual interview, but it's a faulty inference when it comes to actual truth. In an objective world where this was carried out a thousand times, guesses of Tails would be wrong on 1/2 of all trials, but on 1/3 of all interviews.
posted by graymouser at 11:11 AM on November 2, 2015 [1 favorite]


Okay, having slept on it, I think I've decided I like the SB problem. It seems to me that it highlights the fact that a biased experimental design (asking SB twice on tails, once on heads) leaks information about the outcomes (coin flips) without providing any insight into the generative process underpinning them (the coin). An analogous problem might be the "shoddy clairvoyance study" (SCS):
The experimenter (Alice) sits in one room flipping a fair coin. Once per minute, every minute, Alice asks the participant (Bob) - who is in another room and cannot see what happened - "what is the probability that the last coin flip came up heads or tails?". However, Alice does not necessarily flip the coin every minute. If her most recent coin flip comes up heads then she waits one minute before flipping again, but if it came up tails she waits two minutes. How should Bob - who knows perfectly well that Alice is doing this - answer the questions?
Structurally, this seems similar to the SB problem (to me, at least!) in that the experimental design has a bias even though the coin does not: the coin spends twice as long in the tails state as in the heads state, and as a consequence Bob gets asked about the coin twice as often when the state is tails as when the state is heads. A representative run of the SCS might look like this:
    is the coin flipped?       y n y y y y n y n    
    what is the flip outcome?  T . H H H T . T . 
    what is the coin state?    T T H H H T T T T 
    is Bob queried?            y y y y y y y y y
The coin is flipped 6 times during the 9 minute procedure, yielding 3 heads and 3 tails. As a consequence the coin remains in the tails state for 6 minutes and in the heads state for 3 minutes. Bob is, of course, queried at all 9 time points. As in the SB problem, Bob knows that information is leaking to him about the coin state due to a biased experimental design and so if he parses Alice's questions as "please tell me the current state of the coin" he is justified in answering "it's probably tails (2/3 chance)" but if he parses the question as "please tell me the Bernoulli rate associated with the most recent flip", he is justified in answering "the probability of heads on the last flip was 1/2".
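
A longer run of the SCS is easy to simulate; here's a rough Python sketch of the design as described, nothing more than a sanity check:
    import random

    def shoddy_clairvoyance(minutes=100_000):
        flips = []
        tails_minutes = 0
        state, wait = None, 0               # wait = minutes left before the next flip
        for _ in range(minutes):
            if wait == 0:                   # time for Alice to flip again
                state = random.choice("HT")
                flips.append(state)
                wait = 2 if state == "T" else 1
            wait -= 1
            if state == "T":                # Bob is queried every minute
                tails_minutes += 1
        print("fraction of flips landing tails:    ", flips.count("T") / len(flips))  # ~1/2
        print("fraction of queries in a tails state:", tails_minutes / minutes)       # ~2/3

    shoddy_clairvoyance()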

Arguably, the most justifiable answer that Bob (or SB for that matter) could give is "fuck you too".
posted by langtonsant at 12:10 PM on November 2, 2015 [5 favorites]


What I wish we could do is set up a texas hold'em tournament with two teams, halfers & thirders.

Anyone want to guess which team I'd choose to be on?
posted by lastobelus at 12:11 PM on November 2, 2015


on preview: wow, langtonsant, that is the best rephrasing of the problem I've seen that explains why the answer IS 1/3->heads.

And the answer is 1/3. You can philosophize all you want: if you're sleeping beauty and you're playing for money and you're a halfer you're going to lose all your money.
posted by lastobelus at 12:15 PM on November 2, 2015 [1 favorite]


And, it's BECAUSE the coin toss is fair, not because we're doing some suspicious end run around it.
posted by lastobelus at 12:15 PM on November 2, 2015


What completely mystifies me is the claim that the frequentist position is incorrect in extrapolating over many trials -- acknowledging that the expected value over many trials is 1/3 Heads, but insisting that Sleeping Beauty should say 0.5 in any given trial for some reason involving some sort of philosophical purity.

This is basically a rejection of one of the main rules by which all of mathematics is derived from a set of starting axioms.
posted by lastobelus at 12:20 PM on November 2, 2015


If her credence on Sunday was 1/2, and her credence on Wednesday is 1/2, then her credence on Monday and Tuesday is still 1/2.

False. If interviewed on Monday or Tuesday, she knows that she's not in state (H, Tue), but she doesn't know she's not in state (T, Tue). On Wednesday she still knows that, but she also knows she's not in state (T, Tue) either, so it's uninformative.
posted by PMdixon at 12:41 PM on November 2, 2015


And the answer is 1/3. You can philosophize all you want: if you're sleeping beauty and you're playing for money and you're a halfer you're going to lose all your money.

That's exactly how I worked out my answer, earlier in the thread. Like I said previously, it's even more intuitive if you consider not waking twice in a row but ten times in a row. Break the superficial symmetries and it's easy to see that the problem is most simply interpreted as a conditional probability problem.
posted by polymodus at 1:08 PM on November 2, 2015 [1 favorite]


Actually, what I really like about the SCS reframing above is that it also highlights the fact that the SB problem isn't really a Bayesian vs frequentist thing at all. From a Bayesian perspective I feel like SB should answer 1/3 if she parses the question as being about the outcome of the last coin flip, but should answer 1/2 if she parses it as a question about the statistical properties of the coin flip process. But I think a frequentist SB (or frequentist Bob) should give the same answer. In the SCS problem, the outcomes might look like this
    minute:                    1 2 3 4 5 6 7 8 9
    is the coin flipped?       y n y y y y n y n    
    what is the flip outcome?  T . H H H T . T .    - has limiting frequency 1/2
    what is the coin state?    T T H H H T T T T    - has limiting frequency 1/3
    is Bob queried?            y y y y y y y y y
There are two different sequences, one with limiting frequency 1/3 and the other with frequency 1/2. It is not unreasonable for a frequentist Bob to ask which of these sequences he is being asked to report on. But the same set of 6 coin flips could be used to run a sequence of 6 SB experiments across 6 weeks. It's uglier because the SB problem is (deliberately, I suspect) a less clean construction, but it ends up looking much the same:
    week:                     1   2   3   4   5   6
    day:                      M T M T M T M T M T M T
    is the coin flipped?      y n y n y n y n y n y n
    what is the flip outcome? T . H . H . H . T . T .  - has limiting frequency 1/2
    what is the coin state?   T T H H H H H H T T T T  - has limiting frequency 1/2 
    is SB awake for this?     y y y n y n y n y y y y
    coin states when awake    T T H . H . H . T T T T  - has limiting frequency 1/3
A frequentist SB notices that she again has multiple sequences with different limiting frequencies. If she believes she is being asked to report about the properties of the coin itself then she might reasonably parse the question as "what is the limiting frequency of the sequence of flips" and correctly answer 1/2. But if she thinks she is being asked to report on the coin states at those times she is queried, then she could report the limiting frequency of the bottom sequence and correctly answer 1/3.

Hm. Okay, now I'm just irritated. No matter how I approach the problem (Bayesian or frequentist) I feel like it's fundamentally an ambiguous question. There is one parsing that always gives a correct answer of 1/3 and another parsing that always gives a correct answer of 1/2. Neither version is wrong, they are both the correct answer to two different questions, and the SB problem goes out of its way to avoid being clear about which one is being asked. Yep, the right answer is still "fuck you".
posted by langtonsant at 2:25 PM on November 2, 2015 [4 favorites]


Bob knows that information is leaking to him about the coin state due to a biased experimental design and so if he parses Alice's questions as "please tell me the current state of the coin" he is justified in answering "it's probably tails (2/3 chance)"

Actually, on the very first query, he's just as well off answering heads as tails. For the second query, there's a 3/4 chance that the result is tails (half carried over from the first flip, and half of what's left if the first flip came up heads). The probability of the coin being tails at any given query oscillates around 2/3, converging rather quickly. But if you erase Bob's memory between queries, the best answer is always strictly less than 2/3, approaching 2/3 monotonically as the number of queries approaches infinity (quickly at first, then quite slowly). In the case of 2 queries (most analogous to the SB problem, but critically different in that there may be either two coin flips or one) the probability of the most recent flip being tails is 0.625.
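
Those numbers fall out of a short exact recursion over the three possible minute-states (a Python sketch; H means a heads minute, T1/T2 the first and second tails minutes):
    # Exact per-minute distribution over the three states:
    # H = a heads minute, T1 = first tails minute, T2 = second tails minute.
    dist = {"H": 0.5, "T1": 0.5, "T2": 0.0}     # minute 1: the first flip, 50/50
    p_tails = []
    for _ in range(8):
        p_tails.append(dist["T1"] + dist["T2"])
        dist = {
            "H":  0.5 * dist["H"] + 0.5 * dist["T2"],   # a fresh flip lands heads
            "T1": 0.5 * dist["H"] + 0.5 * dist["T2"],   # a fresh flip lands tails
            "T2": dist["T1"],                           # tails carries over one more minute
        }
    print(p_tails)               # 0.5, 0.75, 0.625, 0.6875, ... -> oscillates toward 2/3
    print(sum(p_tails[:2]) / 2)  # 0.625, the two-query memory-erased case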
posted by WCWedin at 2:36 PM on November 2, 2015 [1 favorite]


The ambiguity can be described simply.

In a given Trial (that is, the full process of flipping a coin, awakening SB once or twice, and interviewing her), an answer of "Heads" has a 1/2 chance of being correct.

In a given Awakening (that is, the single act of waking Sleeping Beauty up and interviewing her), an answer of "Heads" has only a 1/3 chance of being correct.

The probability, and therefore the justified credence, of Heads remains 1/2 for each Trial. There is absolutely a 50/50 chance of each Trial having Heads or Tails. But Heads is a bad betting option, because of the skewed number of Interviews.
posted by graymouser at 2:47 PM on November 2, 2015 [4 favorites]


Sleeping Beauty Solutions

1. On waking, SB can assume that it is either Monday or Tuesday, but has no way to tell which day it is. If it is Monday (50% probability), then the coin could have landed either heads (50% probability) or tails (50% probability). If it is Tuesday (50% probability), the coin must have have landed tails, as she is awake (100% probability). Hence the probability that the coin landed heads is 50% x 50% = 25%.

2. As everyone agrees, the SB problem is exactly the same if SB were simply asked "heads or tails?" on each awakening and given $1 for each "correct" answer. After 100 experiments, if she always chose "heads", she would have $50. If she always chose "tails", she would be given $50 (for "Monday Awakenings") and $50 (for "Tuesday Awakenings"). However, as the Tuesday payments happen one day in the future, we must apply an appropriate discount factor to obtain the present value of this $50. Different discount factors could give a present value anywhere between $1 and $49, but as SB does not know which day it is she cannot check current interest rates or other data to choose an appropriate value. Acting rationally, she should therefore chose a median discount rate, giving a present value of $25. Thus the total gain from choosing "Tails" is $75, and the gain from choosing Heads is $50. The payout ratio Heads:Tails is 50:75 which equals a probability of heads of 50/(50+75) = 40%.

3. The SB problem is very complex, with both "halfers" and "thirders" staunchly defending their positions. Obviously SB - who is naught but a poor science experiment subject - is not qualified to answer, but should simply give the average answer that experts have worked out for her. Some say the answer is "one-half", some say "one-third", the average of these answers is five-twelfths = 41.666...%.

4. Combining (3) with (1) and (2), it is obvious that when we say that 'some say the answer is "one-half", some say "one-third"', we are only taking a sample of the whole population of answers, as opposed to all possible answers. The average, based on a sample, should be calculated as (sum of observed results)/(no. of observations minus one), i.e. (0.5+0.333...)/(2-1), which equals five-sixths = 83.333...%.

5. As the coin toss has already taken place, the result must have been heads OR tails, e.g. the probability of heads is either 1 or 0. SB's original credence in the probability of heads before the coin toss was 0.5, which can be rounded to 1 (to one significant figure). This result ("1") of the pre-toss estimation is most consistent with the choice of "1" from "1 or 0" (of the post-toss analysis). Hence the only consistent answer is that the probability of heads = 100%.

6. Paragraphs 1 to 5 of this list give answers (respectively) of 25%, 40%, 41.666...%, 83.333...%, and 100%, and the average difference between the answers of two succeeding paragraphs is 18.75%, ergo the answer supplied in this paragraph 6 = 118.75%.

7. SB should consider various candidates for her answer, each having a good reputation in the problem-solving sector. She is most likely to consider pi, e and phi, as each of these numbers has solid and valuable experience as components of successful equations, and each can provide up-to-date references. However, pi and e will probably consider themselves too "big" for this opportunity, as the best fit for the role would be a number less than one. phi ((1+sqrt5)/2) is the closest fit to the specification and SB should therefore employ phi for her purposes, giving a credence of 1.618033 (to 6 decimal places) = approximately 161.8033%.

8. Looking closely at the question, we see that it states that when SB awakes "[she] will not be able to tell which day it is." As such, she cannot rationally have any belief in what day it is. Ergo, from SB's perspective, her credence that it is Monday must be 0%, and her credence that it is Tuesday must be 0%, and so Heads and Tails both also have probability 0%. To cancel out the fact that the result of the fair coin toss is heads with a probability of 50%, the probability of heads must, from SB's perspective = -50%.

9. Once the coin is flipped, only ONE of heads or tails will result. However, either result would invalidate the condition that the coin is "fair", i.e. producing the same number of heads and tails results after any number of trials. Therefore the coin must have landed neither heads nor tails, but rather on its edge. As there is no heads or tails result, SB cannot have been awoken, and must be dreaming, i.e. the question she is being asked is "imaginary". Therefore, her answer must be an imaginary number = i %.

10. Hey quidnunc, you think you're so fucking "clever", don't you? Writing out all this bullshit when we are just trying to have a conversation in this thread about an interesting problem. "Ooh, look at me, I'm talking nonsense! Everyone pay attention to me!!!" - that is basically all you are doing, you worthless, tedious piece of fucking shit. Really, it seems that the REAL question SB is being asked is: what is your credence that there are any brains in quidnunc's fat, empty, ugly fucking head? And the answer to THAT is 0%.
posted by the quidnunc kid at 2:34 AM on November 4, 2015 [6 favorites]


11. SB should post it to the Green. Opinion will likely be divided: "thirders" will recommend that she DTMF experimenter, "halfers" will explain that she can eat the experimenter only if it is Monday and the coin has not been left on the counter overnight.
posted by langtonsant at 2:58 AM on November 4, 2015 [3 favorites]


Weirdly this thread came up about 2 hours after I started writing a story involving a fairytale Sleeping Beauty type situation and I'm now desperately trying to think of a way to cram this in as some sort of maths metaphor.
posted by howfar at 3:42 AM on November 4, 2015 [1 favorite]


vote #1/3 quidnunc kid
posted by PMdixon at 10:06 AM on November 4, 2015 [2 favorites]

