Why Smart People Are Stupid
June 12, 2012 12:40 PM   Subscribe

Why Smart People Are Stupid (The New Yorker.) A new study suggests that the smarter people are, the more susceptible they are to cognitive bias.
posted by naju (171 comments total) 43 users marked this as a favorite
 
Ha - I knew that's what was going on!
posted by Greg_Ace at 12:44 PM on June 12, 2012 [12 favorites]


Well, of course. There are only so many attribute points to allocate and if you're pumping up Intelligence, Wisdom is often a dump stat.
posted by delfin at 12:45 PM on June 12, 2012 [77 favorites]


So if you are smart you are stupid and you are stupid if you are stupid. This is stupid.
posted by cjorgensen at 12:47 PM on June 12, 2012 [7 favorites]


West and his colleagues began by giving four hundred and eighty-two undergraduates a questionnaire featuring a variety of classic bias problems. Here’s a example:
In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?
The answer is "an example."
posted by It's Raining Florence Henderson at 12:47 PM on June 12, 2012 [64 favorites]


I just knew he was going to start off with Kahneman. Who else could it have been?
posted by maudlin at 12:48 PM on June 12, 2012


This is a bad article, because I could answer the trick questions. QED.
posted by Blazecock Pileon at 12:48 PM on June 12, 2012 [5 favorites]


I'm not too clear on why this is an obviously dumb article or study - feel free to flag, but some serious explanation might be cool.
posted by naju at 12:55 PM on June 12, 2012 [1 favorite]


I agree with this absolutely. People who think they're self-aware and logical doubt themselves less, even if the "logic" only makes sense within their own subjective framework.
posted by wolfdreams01 at 12:55 PM on June 12, 2012 [7 favorites]


Well, it is a new study after all.
posted by michaelh at 12:56 PM on June 12, 2012


This thread is like watching the entire premise of the article in action.

High cognitive intelligence is no guarantee of either rational or wise behavior.
posted by zarq at 12:58 PM on June 12, 2012 [25 favorites]


In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

Which half?

The answer is either 1 or 47.
posted by VTX at 12:59 PM on June 12, 2012 [52 favorites]


ἓν οἶδα ὅτι οὐδὲν οἶδα ("I know one thing: that I know nothing")
-Socrates
posted by the painkiller at 1:00 PM on June 12, 2012 [4 favorites]


In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

Your first response is probably to take a shortcut, and to divide the final answer by half. That leads you to twenty-four days. But that’s wrong. The correct solution is forty-seven days.


This is wrong, of course. There's not enough data here to correctly compute a numerical answer while accounting for overshoot on the last day unless the size of the lily patch on the first day is specified in the initial conditions. We know the lily patch already exists, and is therefore not of size "0", on the first day. Without knowing the initial (proportional) size of the lily patch, we cannot know if the lake is filled on the 47th, or if on the 47th day the lake is just below the threshold for full coverage, and thus it "doubles" in size on the 48th day, but is in fact constrained by the boundaries of the lake on that day, thus filling the lake.

Bias indeed.
posted by atbash at 1:00 PM on June 12, 2012 [40 favorites]


atbash, even allowing for that, the answer is still bounded between 46 and 47 days. The point of the exercise is that it's definitely not 24.
posted by en forme de poire at 1:03 PM on June 12, 2012 [7 favorites]


ἓν οἶδα ὅτι οὐδὲν οἶδα ("I know one thing: that I know nothing")
-Socrates


I thought that was Ted Theodore Logan
posted by MtDewd at 1:03 PM on June 12, 2012 [8 favorites]


Sure, en forme de poire, but the point is that the question to determine bias isn't very well thought out. It's written by somebody who isn't as good at logical thinking as they seem to be, and it's being used as a way to be smug about other people.
posted by atbash at 1:05 PM on June 12, 2012 [1 favorite]


As it turns out, stupid people are stupid, too.
posted by The Bellman at 1:06 PM on June 12, 2012 [2 favorites]


This thread is like watching the entire premise of the article in action.

High cognitive intelligence is no guarantee of either rational or wise behavior.


well, as the Church of the Subgenius pointed out long ago, don't be smart, be a smartass.
posted by philip-random at 1:08 PM on June 12, 2012 [5 favorites]


This is wrong, of course. There's not enough data here to correctly compute a numerical answer while accounting for overshoot on the last day...

I'll give you the point about the initial conditions, but if the area change in the pads is sufficiently continuous and by 48 days they really mean 48 days and not "sometime on the 48th day", there's no overshoot.
posted by Philosopher Dirtbike at 1:10 PM on June 12, 2012 [3 favorites]


From the Abstract of the journal article: "Further, we found that none of these bias blind spots were attenuated by measures of cognitive sophistication such as cognitive ability or thinking dispositions related to bias. If anything, a larger bias blind spot was associated with higher cognitive ability."

From the use of "if anything" weasel words, I reckon the headline "Why Smart People Are Stupid" is founded on shitty data. Anyone have a link to the full article?
posted by exogenous at 1:13 PM on June 12, 2012


In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

Your first response is probably to take a shortcut, and to divide the final answer by half. That leads you to twenty-four days. But that’s wrong. The correct solution is forty-seven days.

This is wrong, of course. There's not enough data here to correctly compute a numerical answer while accounting for overshoot on the last day unless the size of the lily patch on the first day is specified in the initial conditions. We know the lily patch already exists, and is therefore not of size "0", on the first day. Without knowing the initial (proportional) size of the lily patch, we cannot know if the lake is filled on the 47th, or if on the 47th day the lake is just below the threshold for full coverage, and thus it "doubles" in size on the 48th day, but is in fact constrained by the boundaries of the lake on that day, thus filling the lake.

Bias indeed.


The answer is given in the problem, the question is just testing people's ability to notice that. You're interpreting a question asked in bad faith whose purpose is to neatly divide (ha) people into exactly two categories for a question about rates of change. There isn't actually any lake. This is really about helping insecure smart people feel confident that they aren't one of those dreadful stupid smart people, who aren't even aware of how dumb they are. In other words, it's perfect for the New Yorker, and perhaps only deficient in how blatant it is.
posted by clockzero at 1:19 PM on June 12, 2012 [19 favorites]


Everybody's stupid in their own way.
posted by The Card Cheat at 1:20 PM on June 12, 2012 [5 favorites]


It's an adorable kind of status anxiety. "Oh, Jesus, smart people can be stupid too now? Which kind am I?"
posted by clockzero at 1:20 PM on June 12, 2012 [5 favorites]


I know I'm biased in ways I probably don't even suspect. I'm not sure my backpack qualifies me for stupidity, however, unless I deny it's there.
posted by Mooski at 1:21 PM on June 12, 2012


atbash: It's written by somebody who isn't as good at logical thinking as they seem to be would like to think they are

FTFY.
posted by Greg_Ace at 1:21 PM on June 12, 2012




I am comfortable being stupid since there is ample evidence that there are at least dozens of people more stupid than I.
posted by Joey Michaels at 1:22 PM on June 12, 2012


I also don't understand why this is a stupid study. What is stupid is the headline, which equates "stupidity" with "susceptibility to cognitive biases" for the sake of a catchy title.
posted by hot soup at 1:24 PM on June 12, 2012 [2 favorites]


This is wrong, of course. There's not enough data here to correctly compute a numerical answer while accounting for overshoot on the last day unless the size of the lily patch on the first day is specified in the initial conditions. We know the lily patch already exists, and is therefore not of size "0", on the first day. Without knowing the initial (proportional) size of the lily patch, we cannot know if the lake is filled on the 47th, or if on the 47th day the lake is just below the threshold for full coverage, and thus it "doubles" in size on the 48th day, but is in fact constrained by the boundaries of the lake on that day, thus filling the lake.

This is over-thinking a lake of lily pads. Sufficient data is given: the rate of growth, and the fact that the lake becomes fully covered after exactly 48 days. Given those conditions as true, the lake will be exactly half-covered after exactly 47 days.
posted by rocket88 at 1:25 PM on June 12, 2012 [6 favorites]


it's being used as a way to be smug about other people

Well, maybe some people have used it that way - but I don't think the original study has anything to do with feeling smug. If anything the point is to show how easy it is to be fooled.

Incidentally, I thought this was interesting right up until the conclusion, where it sort of sputters out without saying anything new. Also, one thing I wonder about is the effect of reinforcement: people who are high scholastic achievers have often been told that they're smart over and over again, through evaluations or SAT scores or whatever, which seems like it could lead to overconfidence when performing a misleading mental task.
posted by en forme de poire at 1:27 PM on June 12, 2012 [2 favorites]


Good short article. Best part was the last two paragraphs.
posted by polymodus at 1:28 PM on June 12, 2012


atbash: It's written by somebody who isn't as good at logical thinking as they seem to be would like to think they are

FTFY.


Thanks, that's exactly what I meant to type.
posted by atbash at 1:28 PM on June 12, 2012 [1 favorite]


In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake?

Which half?

The answer is either 1 or 47.


Ding! Ding! Ding!

And if you were to find how long it takes, on average, to cover one half of the lake...?

Careless shortcut FTW.
posted by Sys Rq at 1:29 PM on June 12, 2012 [6 favorites]


I think this explains why the putative scientists in Prometheus acted like such idiots.
posted by Cash4Lead at 1:31 PM on June 12, 2012


The answer is given in the problem, the question is just testing people's ability to notice that. You're interpreting a question asked in bad faith whose purpose is to neatly divide (ha) people into exactly two categories for a question about rates of change.

...Which means that ultimately the only thing they're testing is reading comprehension, and an eye for ambiguously worded trick questions. I spend a lot of my time reviewing spec documents and SOWs for our clients, trying to suss out potential pitfalls; I spotted the gotchas in all of the questions they asked in the article, but that has nothing to do with being "smart." It's about being trained to look for gotchas, rather than assuming that the speaker is making a good-faith attempt to communicate clearly and unambiguously.
posted by verb at 1:31 PM on June 12, 2012 [13 favorites]


the lake becomes fully covered after exactly 48 days

The question doesn't say that, though, so I think this is a bit of an assumption. I made the same assumption. The wording of the question reminds me of a word problem from a grade-school math textbook, and those are usually looking for whole-number answers, so I think it's a pretty safe assumption, but Atbash has the more correct answer.
posted by VTX at 1:33 PM on June 12, 2012


It's Raining Florence Henderson, once again, proves to be my favorite MeFite!
posted by computech_apolloniajames at 1:33 PM on June 12, 2012


I love problems like the lily pond problem and the bat and ball problem — any idea where I can find more?

I suspect problem solving is often a case of pattern matching, which opens you up to error when a pattern is a close but not exact match. If you do a lot of problem solving, and you encounter a new problem, you’re likely to see that problem in light of all the problems you’ve already solved, which may increase your confidence in your answer inappropriately.

It reminds me a bit of the old question about borrowing money to buy something: Our hero borrows $50 from each of two friends. $50 + $50 = $100. Our hero buys the item for $97, returns $1 to each of the friends, and keeps $1. Now each friend is owed $49. $49 + $49 + $1 = $99… where did the other $1 go?

The trick is that we've established a narrative that the amounts mentioned should add up when in fact it’s just nonsense. We establish a pattern and then have trouble because the pattern is wrong.
posted by danielparks at 1:33 PM on June 12, 2012 [5 favorites]
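The ledger in the borrowed-money puzzle can be checked in a few lines (a minimal sketch of my own, not from the comment; the variable names are illustrative):

```python
# The "missing dollar" puzzle: our hero borrows $50 from each of two
# friends, buys a $97 item, returns $1 to each friend, and keeps $1.
borrowed = 50 + 50     # 100
item_cost = 97
returned = 1 + 1       # $1 back to each friend
kept = 1

owed = borrowed - returned       # each friend is owed 49; 98 total
assert owed == 98
assert owed == item_cost + kept  # the 98 owed splits into 97 + 1

# The puzzle's 49 + 49 + 1 = 99 adds the kept dollar ON TOP of the
# debts, but the kept dollar is already part of what's owed, not
# extra -- so there is no reason the sum should come to 100.
print(owed, item_cost + kept)
```

The "pattern" that fools people is exactly what the comment says: the narrative implies the amounts should total $100 when no such identity holds.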


roger ackroyd: "Also smart people often forget to wear panties and fall through the ceiling."

That thread is quite relevant to this FPP. The OP takes shortcuts, and doesn't stop to think before she (literally) leaps. An answer mentions that "common sense is not innate."
posted by zarq at 1:33 PM on June 12, 2012


danielparks: "We establish a pattern and then have trouble because the pattern is wrong."

Playing a record? I'll show you something interesting.
posted by zarq at 1:35 PM on June 12, 2012 [6 favorites]


It's about being trained to look for gotchas, rather than assuming that the speaker is making a good-faith attempt to communicate clearly and unambiguously.

Right, exactly. The article's purpose is to make people feel better about themselves by conflating critical thinking with receptivity to having-it-pointed-out-for-one which situations require critical thinking via presentation in an article about cognitive bias in a high-minded magazine.
posted by clockzero at 1:36 PM on June 12, 2012 [2 favorites]


When I read the lily pad question, my first thought was that it was stupid because the input values make no sense. Assume the lily pad patch on day 1 is as small as it could be and still be there; let's say 1 square foot. So on day 2 it's 2 s.f., and on day d it's 2^(d-1) sf, where d is the day. So on day 48 it's 2^47 sf, which is 140,737,488,355,328 sf. 27,878,400 sf equal one square mile, so that's about 5,048,263 square miles. If the lake is square (for simplicity) that's a lake that's 2,247 miles x 2,247 miles. Where the hell is there a lake that size?
posted by freecellwizard at 1:36 PM on June 12, 2012 [18 favorites]
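The arithmetic above holds up; here it is as a quick Python sketch, under the same assumption of a 1-square-foot patch on day one:

```python
import math

day1_sf = 1                     # assumed starting patch: 1 sq ft
day48_sf = day1_sf * 2 ** 47    # doubles daily: day d is 2**(d-1) sq ft
sf_per_sq_mile = 5280 ** 2      # 27,878,400 sq ft in a square mile

lake_sq_miles = day48_sf / sf_per_sq_mile
side_miles = math.sqrt(lake_sq_miles)   # square lake, for simplicity

print(day48_sf)               # 140737488355328 sq ft
print(round(lake_sq_miles))   # ~5,048,263 sq mi
print(round(side_miles))      # ~2,247 miles per side
```

For scale, that's roughly twice the area of the contiguous United States' largest lakes combined with room to spare, which is the commenter's point.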


Patch of lily pads = plural number of lily pads. I'm going to say two don't make a patch. Let's make a conservative estimate of 4. Four doubling every day until it fills the lake on the 48th day equals 2 to the 49th power, or 562,949,953,421,312 lily pads. 562.9 trillion pads. I don't believe this lake exists.
posted by dances_with_sneetches at 1:37 PM on June 12, 2012 [10 favorites]


Metafilter : ... we've established a narrative .... when in fact it’s just nonsense.

(Actually, I think it applies better to The New Yorker.)
posted by benito.strauss at 1:37 PM on June 12, 2012


Jesus Christ. Daniel Kahneman is a well-respected psychologist who has done a lot of good research on cognitive biases, much of which is detailed in Thinking, Fast and Slow. The basic thesis of the book is that we have two modes of thinking; one is quick, because it (to put it glibly) operates as a lookup table, which is the cause of a lot of biases because we tend to be overly enthusiastic in assuming that a new situation matches an old one that we already have an answer for; and the other is pretty much good old rational thought. Another source of bias is that we think we're engaging in Mode 2 thinking when we're actually engaging in Mode 1 thinking, speaking of which, do you folks have to be such turds about these things?
posted by invitapriore at 1:39 PM on June 12, 2012 [31 favorites]


This thread is like watching the entire premise of the article in action.

The entire site and life itself is like that. Intelligence isn't one particular trait, though we like to think it is. Nor does it exist in a vacuum by itself.

So yeah, incredibly smart people can easily do incredibly stupid things.
posted by Brandon Blatcher at 1:39 PM on June 12, 2012 [1 favorite]


So if you are smart you are stupid and you are stupid if you are stupid. This is stupid.

I totally believe it, but unfortunately I'm stupid, so take that with a grain of salt.
posted by kittens for breakfast at 1:39 PM on June 12, 2012


I have no trouble believing this. In fact, if you go through my past posts you can find me ranting about people I used to work with who felt that my diluting to a precise concentration was humanly impossible (despite there being four numbers on the dial of the pipette) and generally dismissing any answer obtained with math (rather than empirically) to be "just theoretical".

That said, I don't think it has to do with being smart or educated. It's about being so sure of yourself that you never stop to look at the details of what you're doing.
posted by Kid Charlemagne at 1:39 PM on June 12, 2012 [1 favorite]


Incidentally, I thought this was interesting right up until the conclusion, where it sort of sputters out without saying anything new.

Good short article. Best part was the last two paragraphs.


Hahaha, to each their own I guess.

Clockzero/verb, if reinforcement is really part of this I actually wonder if this article will make people less aware, vs. more aware, of their own cognitive biases. You read the questions in an article where you're primed not to look for the "easy" solution, so you're more likely to get them right, and then you go, "hey, I don't suffer from cognitive biases like those other people" and go about the rest of your day.
posted by en forme de poire at 1:40 PM on June 12, 2012


I just readily accept that math and science word problems are a curious creature that necessarily contains all sorts of implicit assumptions to remove the messiness. How the hell did you guys make it through K-12 education?
posted by naju at 1:41 PM on June 12, 2012 [2 favorites]


A shocking, shocking discovery: smart people can be wrong. Yes! It's true!
posted by Malor at 1:42 PM on June 12, 2012


3 lily pads can easily be described as a patch.
posted by found missing at 1:42 PM on June 12, 2012


My wife:
Only dumb people read that magazine
Who makes more money, the dumb or the smart?
posted by Postroad at 1:47 PM on June 12, 2012


Gripping new research separates all humans into idiots, idiots, and insufferable pedants.
posted by nanojath at 1:49 PM on June 12, 2012 [12 favorites]


If the lake is square (for simplicity) that's a lake that's 2,247 miles x 2,247 miles. Where the hell is there a lake that size?

Titan?
posted by atbash at 1:49 PM on June 12, 2012


From The Bellman's Dunning-Kruger link

"Actual competence may weaken self-confidence, as competent individuals may falsely assume that others have an equivalent understanding. As Kruger and Dunning conclude, "the miscalibration of the incompetent stems from an error about the self, whereas the miscalibration of the highly competent stems from an error about others""

"The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt."

-Bertrand Russell
posted by marienbad at 1:49 PM on June 12, 2012 [10 favorites]


Clockzero/verb, if reinforcement is really part of this I actually wonder if this article will make people less aware, vs. more aware, of their own cognitive biases. You read the questions in an article where you're primed not to look for the "easy" solution, so you're more likely to get them right, and then you go, "hey, I don't suffer from cognitive biases like those other people" and go about the rest of your day.

Cognitive biases aren't something that certain people suffer from. They're defaults -- like the blind spot in the human eye. No one is magically immune to them, regardless of intelligence. Some people have been trained (either explicitly, or by brutal experience) to look carefully for evidence of those biases in their own reasoning, but that's a learned skill rather than something inherent or directly related to intelligence or "smartness."

It's also a skill that takes work to engage, so most people don't bother working through that level of rigor for unimportant matters. That supports the idea that the writer seems to be getting across in most of his published work, and even what he seems to be communicating in the text of the article once he gets past the smuggy framing.

The problem I have is that what he's talking about is a human trait that people -- smart or dumb -- share. Not "stupidity." The framing of the article meshes perfectly with the modern cult of ignorance, whether the writer intended it to or not.
posted by verb at 1:53 PM on June 12, 2012 [10 favorites]


How the hell did you guys make it through K-12 education?

With much hostility towards having my time wasted by dumb "educators". It's served me well.
posted by atbash at 1:53 PM on June 12, 2012 [4 favorites]


Smart people also have trouble answering puzzles that are supposed to be tricky. This is not surprising. I can write academic articles, but I can't solve simple mazes as quickly as my intellectually disabled niece. She also whups me at Dino-opoly.
posted by jb at 1:59 PM on June 12, 2012


A new study suggests that the smarter people are, the more susceptible they are to cognitive bias.

I call this phenomenon "Metafilter".
posted by Artw at 1:59 PM on June 12, 2012 [7 favorites]


The fundamental problem with the lily pad problem is that it is completely at odds with how reality works. If you were really doing this as an experiment, you'd find that the growth of the lily pad cluster only happened around the edges, not in the middle (because that area is already covered). So, if you have a clue as to how reality works, you're screwed.

I'm pretty sure I could come up with a test that most people would do abysmally at by setting up all kinds of screwy premises and then asking people to tell me that Bob needs to drink at least three gallons of Pepsi Blue a day beyond his current consumption to lose 50 lbs. by the end of the summer.

Their other example, about the size of trees, is more about dealing with a world where your ability to pass standardized tests is more important than your having a clue. Reap the whirlwind says I.
posted by Kid Charlemagne at 2:06 PM on June 12, 2012 [2 favorites]


There are an infinite number of things to know, so the finite number of things we do know, in comparison, is practically nothing.
posted by JHarris at 2:07 PM on June 12, 2012 [2 favorites]


verb, maybe I wasn't clear - I don't think we're really disagreeing. I understand that cognitive biases are universal. What I was trying to say is more that because of the framing of this particular article, it's a lot more likely that the reader will answer the questions correctly because the article prepares them to reject the "easy/intuitive" solution. So it seems like the examples could have the opposite of the intended effect: rather than demonstrate the need for rigor and self-doubt, they might make people feel baselessly confident in their ability to circumvent cognitive biases. (I agree with your last point, also.)
posted by en forme de poire at 2:08 PM on June 12, 2012


Some people have been trained (either explicitly, or by brutal experience) to look carefully for evidence of those biases in their own reasoning, but that's a learned skill rather than something inherent or directly related to intelligence or "smartness."

Dunno, it's not clear that one can be reliably trained, either, really --- Kahneman himself says that 40 years of studying this stuff hasn't made him any less susceptible to such biases. I think it may be possible to be trained to avoid a bias in a particular set of circumstances, but I don't think it would protect you in other applications of the same principle --- beginning drivers are taught to check their blind spots when changing lanes, but would even an experienced driver think to do that if you put them on a Segway or a skateboard? I doubt it...

As a sidebar, I wish people could get over their nerdly headline rage. Headlines: they're meant to be glib, nuance-less, eye-catching recaps of the gist of the article. You will find, pretty much every single time, that the actual article's point is subtler and more complex than the headline. Half the time they're not even written by the author of the piece. I find it mystifying how people who've been reading news for decades will still look at a piece like this and get in a snit over a glib headline as if they didn't understand these conventions.
posted by Diablevert at 2:08 PM on June 12, 2012 [5 favorites]


So if it's true that the smarter people are the more susceptible they are to cognitive bias, and it's true that the two questions are a fair measure of cognitive bias, the fact that I got both of them correct means that I'm not stupid, but am also not so smart that I experience inflated cognitive bias, right? I've got Goldilocks intelligence. Vive la médiocrité!
posted by zylocomotion at 2:10 PM on June 12, 2012


You don't need initial conditions to solve the lily pad problem. What happens on day 1 or day 0 is completely irrelevant.
The growth rate (doubles every day) defines the shape of a curve. The final condition (lake full after 48 days) defines one point on that curve. After that it's no longer about lily pads or lakes, it's a pure math problem.
A function doubles every day. The function is X after 48 days. When is the function X/2?
posted by rocket88 at 2:15 PM on June 12, 2012
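That pure-math framing can be sketched in a couple of lines (a minimal illustration; `days_to_fraction` is a name of my own invention, and the point is that the starting size A0 cancels out of the ratio):

```python
import math

def days_to_fraction(days_to_full, fraction):
    """With daily doubling, A(t) = A0 * 2**t, so the ratio
    A(t) / A(days_to_full) = 2**(t - days_to_full) = fraction.
    Solving for t needs no initial condition at all."""
    return days_to_full + math.log2(fraction)

print(days_to_fraction(48, 0.5))    # 47.0 -- one doubling before full
print(days_to_fraction(48, 0.25))   # 46.0 -- two doublings before full
```

Since 0.5 and 0.25 are exact powers of two, `math.log2` returns exactly -1.0 and -2.0 here, so the answers come out as whole days.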


undergraduates

I think I see a potential problem.

In my experience it doesn't matter how "smart" an undergraduate is, the special concoction of biology and non-normative social settings makes most undergraduates know-it-all smartasses. I was one; all my friends who are wise were ones as well.

I think a multi-generational study should be conducted before reaching conclusions as broad as "smart people are dumb"
posted by edgeways at 2:15 PM on June 12, 2012 [1 favorite]


So yeah, incredibly smart people can easily do incredibly stupid things.

Yep, and this site, and especially neighboring subsites, would be lots more boring if it wasn't true.
posted by MCMikeNamara at 2:16 PM on June 12, 2012


Speaking of lazy shortcuts, that first example the article gives must've come out of the bumf for Daniel Kahneman's book, as the exact same one was used on a Radio 4 documentary just the other week.
posted by MartinWisse at 2:16 PM on June 12, 2012


Stanovich is the man.
posted by painquale at 2:22 PM on June 12, 2012


This thread is like watching the entire premise of the article in action.

You're making the presumption that people weren't riffing on the contents of the article and pretending to be unself-aware.
posted by cjorgensen at 2:22 PM on June 12, 2012 [1 favorite]


The basic thesis of the book is that we have two modes of thinking; one is quick, because it (to put it glibly) operates as a lookup table, which is the cause of a lot of biases because we tend to be overly enthusiastic in assuming that a new situation matches an old one that we already have an answer for; and the other is pretty much good old rational thought

I don't think these questions really prove two different modes at all. The "good old rational thought" involves the same sorts of calculations, it just takes the extra step of figuring out what the question is actually talking about. All of the questions listed have tricks of some sort that try to force the answerer to understand the question as asking something that it doesn't really ask. For the lily pad one, the incorrect assumption it's trying to force is that the question is asking when the lily pads will be halfway finished in terms of time rather than the amount of time needed to cover a particular area of the lake. So the person goes "Oh, half the time, I just take the total time and divide it by two" instead of "Oh, the time for the last half, I just take the doubling time and subtract it from the full lake area coverage time". The only part that involves non-shortcut logic is the part where the person unravels the question itself; if the questions were framed in less ambiguous ways that clearly presented what the actual problem was, then the shortcuts would start working again.
posted by burnmp3s at 2:30 PM on June 12, 2012 [1 favorite]


I think what hasn't been raised in this thread yet regarding the lily pad example is the amount of ideological context that is left out of the question. It isn't a trick question as some people have supposed; rather, it is really about how much you already understand about these stylized word problems. The more formal education you've acquired, the more likely you are to intuit the parts that are underspecified, for instance the assumptions about how mathematical formulas relate to the real-life objects being alluded to. After all, who the fuck actually cares about lily pads? The answer is straightforward only because you have the training to understand the language of it.

I was really good at taking SATs, not because I was particularly fast or brilliant, but in no small part because I knew precisely the kind of thinking the questions looked for.
posted by polymodus at 2:37 PM on June 12, 2012 [2 favorites]


The article has the ball and bat question and the lily pad question, but it's missing the third question (yes, usually these questions appear together... Google it!)
If it takes five machines 5 minutes to make five widgets, how long would it take 100 machines to make 100 widgets?
posted by twoleftfeet at 2:39 PM on June 12, 2012 [2 favorites]
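For the record, the standard reasoning on that third question, sketched out (my own illustration, not from the thread; the trap is scaling the time along with the machine and widget counts):

```python
# Five machines make five widgets in five minutes, so one machine
# makes one widget in five minutes. Machines and widgets scale
# together; the time per widget doesn't change.
minutes_per_widget_per_machine = 5 * 5 / 5   # 5.0

machines, widgets_needed = 100, 100
# Each of the 100 machines builds exactly one widget, in parallel.
minutes_needed = minutes_per_widget_per_machine * widgets_needed / machines
print(minutes_needed)   # 5.0 -- not the intuitive 100
```

The intuitive-but-wrong answer (100) comes from pattern-matching "100 machines, 100 widgets" to "100 minutes", the same System 1 shortcut as dividing 48 by two in the lily pad question.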


The comments here are exactly what I expected, just from glancing at the title.
posted by Evernix at 2:41 PM on June 12, 2012 [1 favorite]


"42"
posted by Greg_Ace at 2:42 PM on June 12, 2012 [4 favorites]


cjorgensen: "You're making the presumption that people weren't riffing on the contents of the article and pretending to be unself-aware."

Looking back at the thread, it seems far more likely that people are being defensive. :)
posted by zarq at 2:42 PM on June 12, 2012


(dammit, I meant to reference twoleftfeet's question)
posted by Greg_Ace at 2:43 PM on June 12, 2012 [1 favorite]


I don't think these questions really prove two different modes at all.

There's a good argument to be had on that point, but it's worth noting that those questions were posed to subjects in a study led by Richard West that was published (and likely conducted) after Kahneman's book was published. Apart from the similar focus, those examples are unrelated to anything Kahneman has done, as far as I know.
posted by invitapriore at 2:43 PM on June 12, 2012


Looking back at the thread, it seems far more likely that people are being defensive. :)

Alternately, a bit frustrated. The writer's book definitely deals with an important subject, but the framing of the article and a lot of the promotional chatter surrounding it all but ensures that a completely different message will be received by many listeners.

If this MeFi post had used the title "Dumb people are better than smart people," it would be a misrepresentation of the article and its contents -- just as the "Smart people are dumb" tagline is a misrepresentation.
posted by verb at 2:45 PM on June 12, 2012 [2 favorites]


Greg_Ace, 42 is indeed the answer, just not the answer to my question.
posted by twoleftfeet at 2:46 PM on June 12, 2012


Oh for fuck's sake, fuck the fucking lily pads! What the hell is with this weird "if-you-refute-the-premise-of-the-lily-pad-question, Lehrer-is-destroyed!" theme in this thread? So the lily pad thing obeys the conventions of word problems and is not a perfect analogue of real-world bio systems. It's but one example out of dozens tested of one particular type of cognitive blind spot. If the lily pad problem displeases y'all, then fuck it. The same pattern was seen with the redwood-type questions, which involved a different type of bias and absolutely zero algebra --- just making an estimate. And the pattern, that how smart people were in general didn't reduce the chance that they'd display the bias, held. We already know the biases exist and that people have them; that's what K&Co.'s first 40 years of research were about. This is about how being smart offers no protection from bias.
posted by Diablevert at 2:48 PM on June 12, 2012 [15 favorites]


You can always tell a Harvard man, but you can't tell him much.
posted by msalt at 2:50 PM on June 12, 2012 [5 favorites]


Speaking of lazy shortcuts, that first example the article gives must've come out of the bumf for Daniel Kahneman's book, as the exact same one was used on a Radio 4 documentary just the other week.

I think they used it because it was one of the questions actually used in the study (pdf here, may be paywalled though).
posted by en forme de poire at 2:52 PM on June 12, 2012


By the way, twoleftfeet, I just barely managed to avoid outwitting and second-guessing and double-whammying myself and figured out the correct answer before I looked it up. But as polymodus pointed out upthread, only because I knew what sort of question it was....
posted by Greg_Ace at 2:54 PM on June 12, 2012


verb: " Alternately, a bit frustrated. The writer's book definitely deals with an important subject, but the framing of the article and a lot of the promotional chatter surrounding it all but ensures that a completely different message will be received by many listeners."

Fair enough.
posted by zarq at 2:58 PM on June 12, 2012


I find it unhelpful that the researcher correlates "ability to casually solve word problems" and "SAT scores" with "intelligence."

Isn't the real takeaway here that people who have high levels of confidence are wrong more often than they realize? His finding seems to be that "people who score well on the SAT" are overconfident about their ability to solve word problems.

I do not consider this surprising research. Mostly it serves as an argument for discounting the SAT as an indicator of "intelligence" full-stop. I would be interested to see how his experiments went with a group that had never taken the SAT (or a similar test) yet had the basic mathematical proficiency needed to solve the problems. If one is never told, "your society prizes your ability to solve these problems quickly and correctly in a timed situation" would that change the approach of the subjects? When I saw the lily-pad question my brain went into panic mode because when do you ever have a contrived scenario like that except when someone is testing you in a way that is rudimentary but cost-effective?

The overall framing of the article is snarky, and doesn't seem like a good-faith way to examine the problem of how people address their own bias.
posted by newg at 3:10 PM on June 12, 2012 [2 favorites]


The final conclusion of the article also appears to be, "You're fucked." Self-analysis is ultimately not very useful in overcoming the effects of cognitive bias, which is kind of an unsurprising revelation for anyone who's dabbled in postmodernism. What the article seems to overlook is that the ease with which we spot cognitive biases in others provides a potential way out.

A group of people willing to present conflicting opinions and alternative ideas, a deliberate willingness to listen to the perspectives of others, and a conscious embrace of outside criticism for the purpose of learning... those things would seem to be useful tools, based on what the article says.
posted by verb at 3:30 PM on June 12, 2012 [2 favorites]


This is about how being smart offers no protection from bias.

I guess I disagree that people need protection from this kind of bias. It makes sense for people to read a trick question in the normal way that trick questions are designed to elicit. When someone does the "Ilk, silk, bilk, quick, what do cows drink?" thing and the person says milk instead of the "correct" answer of water, that's taking advantage of a positive aspect of associative memory in that it can quickly find a word that matches the rhyming pattern, has to do with cows, and is drinkable. In the estimation question, it's smart to take some concrete information that a trusted source gives you about a problem to help you solve the problem. The researchers could have lied and said "The second tallest redwood tree is 1052 feet tall, estimate how tall the tallest one is" and that would show the same bias, but it would look less like a fault of the person answering the question because they were unambiguously fed false information about the question. These are normal parts of human cognition that are being framed as wrong or biased when in fact the source of the bias is coming from the people purposely designing ways to produce a biased result. It would be similar to setting up a door that has a big sign that says "Push" on it even though you have to pull on it to open it, and claiming that anyone who tries to push on it a few times even though it won't work is suffering from a cognitive blind spot or isn't protected from an important type of bias.
posted by burnmp3s at 3:34 PM on June 12, 2012 [1 favorite]


Most brain power is spent on self-deception and finding arguments in favor of preconceived notions.
posted by pterygota at 3:39 PM on June 12, 2012 [1 favorite]


Which half? The answer is either 1 or 47.

If you're being pedantic about which half, isn't the answer either 47 or 48?
posted by one_bean at 3:42 PM on June 12, 2012 [3 favorites]


Most brain power is spent on self-deception and finding arguments in favor of preconceived notions.

I was going to accept your premise, but my years of experience suggest you're wrong.
posted by maxwelton at 3:43 PM on June 12, 2012


Cognitive biases can be a good thing. Often "smart" people (the ones who do well on the SATs, or whatever) are quick at working mental models, and models involve underlying assumptions, which we may as well call the "bias" of the model. In a situation which has been constructed to violate the assumptions of the model, the bias will give a wrong answer, but the model might still work in an overwhelming number of situations, so it's a good thing.

I'm thinking of the way that conjurers hate performing for small children because they are harder to misdirect than adults. I place a coin in your hand and ask you to close your fist around it tightly. I demonstrate this with you a couple times, "just to make sure you understand", and then the third time you close your fist I just press hard on your palm. When you open your hand, the coin is gone and you are amazed. The child is not amazed, because two instances didn't trigger the expectation of a third similar instance. But most of the time the world doesn't work that way, and the adult knows it, and, in fact, relies on it. And that's a good thing, because the model works most of the time... unless you meet a magician or a cognitive psychologist.

On preview, what burnmp3s said.
posted by twoleftfeet at 3:47 PM on June 12, 2012 [3 favorites]


The trouble is, I actually am right all the time.
posted by Decani at 3:58 PM on June 12, 2012


This is all about communication, isn't it? You can communicate to deceive, or you can communicate in good faith. These questions are designed to deceive, like a con artist talking to a mark. If you rewrote the same problems with the intent of minimizing misunderstanding, I'm sure you'd be able to craft equivalent scenarios that saw most people immediately getting the right answer.

It's weird to see so much attention and focus placed on the victim of these questions, as it were, rather than the perpetrator, the party with real agency and control (statistically speaking). I appreciate that they're two sides of the same coin, but we sure as hell could do with a broader shared understanding of what it takes to communicate clearly, and a little less emphasis on the idea that we're all idiots and there's nothing you can do about it. Since one thing you can do about it is communicate better.
posted by jsturgill at 4:01 PM on June 12, 2012 [2 favorites]


So the lily pad thing obeys the conventions of word problems and is not a perfect analogue of real-world bio systems. It's but one example out of dozens tested of one particular type of cognitive blind spot.

Right, but part of the blind spot is where we hold shit up to our real world experiences and say, "Shit, that can't be right!" At the gut level we're actually very good at doing calculus in our heads. (And when I say "we", I mean mammals.) So if the quick answer matches the empirically comfortable answer, then yeah, that's what we tend to give.
posted by Kid Charlemagne at 4:07 PM on June 12, 2012 [1 favorite]


This is all about communication, isn't it?

No, it is not. The point of the study was not "they got the answer wrong, therefore they are stupid." The point of the study was "we already know that many people get these kinds of questions wrong, because of the built-in shortcuts the brain takes when solving problems. Are smart people less likely to get them wrong? Are smart people's brains less likely to take those shortcuts?"

That's what they're testing. The response of the vast majority of the thread has been, "well, that's not fair, those questions are designed to trip people up!" Christ-on-mother-lovin'-crutch, of course they are. In fact, those types of question have been tested so often that we can predict on average how likely people are to fuck 'em up. And draw statistically significant conclusions about how tests of general intelligence like the SATs correlate with the likelihood of their fucking them up!

The study did not set out to prove that cognitive biases exist. We know they exist. What remains to be determined is whether there's a reliable way to overcome them. One such way, we might think, is general intelligence --- that the smarter you are, the better you are at spotting gaps in logic and irrational conclusions and faulty reasoning. The answer --- as indicated by this particular research, which will obviously need to be confirmed and tested and expanded on --- is no. Not when it's our brain doing the thinking. Smart people suck just as bad as ordinary people at figuring out when they're being dumbasses.
posted by Diablevert at 4:21 PM on June 12, 2012 [12 favorites]


I'd like to see a whole bunch of these questions bundled together in a sort of reverse IQ test. "Well, Bob, I see you got 18 out of 20 questions wrong. You are a genius."
posted by twoleftfeet at 4:25 PM on June 12, 2012 [1 favorite]


I'd like to see more people actually try to understand what a cognitive bias is.
posted by Rocket Surgeon at 4:27 PM on June 12, 2012 [3 favorites]


I'd like to see the authors' unargued-for assertion that bias is the root of irrationality defended.
posted by thelonius at 4:29 PM on June 12, 2012


One such way, we might think, is general intelligence --- that the smarter you are, the better you are at spotting gaps in logic and irrational conclusions and faulty reasoning. The answer --- as indicated by this particular research, which will obviously need to be confirmed and tested and expanded on --- is no.

I don't think the study says anything about "spotting gaps in logic and irrational conclusions and faulty reasoning". A cognitive bias isn't the same thing as a logical fallacy. When you make a logical fallacy you have violated the laws of logic. Cognitive biases have more to do with "laws" of perception and expectation, and violations of those are shared by both cognition and the particular situation.
posted by twoleftfeet at 4:31 PM on June 12, 2012


Last sentence: The more we attempt to know ourselves, the less we actually understand.

...crap
posted by 3FLryan at 4:40 PM on June 12, 2012


The more we attempt to know ourselves, the less we actually understand.

I feel the same way about the cartoons in the New Yorker.
posted by twoleftfeet at 4:49 PM on June 12, 2012 [2 favorites]


Everybody's stupid in their own way.

Everybody's stupid, that's for sure.
posted by ovvl at 4:50 PM on June 12, 2012


> It's written by somebody who isn't as good at logical thinking as they seem to be, and it's being used as a way to be smug about other people.

For the benefit of you and all the other people who don't seem to have bothered to read past the examples, I'll reproduce a salient portion from farther on:
This finding wouldn’t surprise Kahneman, who admits in “Thinking, Fast and Slow” that his decades of groundbreaking research have failed to significantly improve his own mental performance. “My intuitive thinking is just as prone to overconfidence, extreme predictions, and the planning fallacy”—a tendency to underestimate how long it will take to complete a task—“as it was before I made a study of these issues,” he writes.
Nobody's being smug in the article. Just here. (Which, as others have pointed out, isn't a bit surprising.)
posted by languagehat at 4:53 PM on June 12, 2012 [4 favorites]


ἓν οἶδα ὅτι οὐδὲν οἶδα

"I drank what?"
posted by kirkaracha at 4:53 PM on June 12, 2012 [5 favorites]


What
posted by jonmc at 4:57 PM on June 12, 2012


Annoyingly, I can't access the preprint yet, since my institution doesn't use PsycNET to access APA journals. But I'm not sure that the New Yorker summary is getting everything right, at least judging by the abstract. From the abstract, there seem to be three key points:

(1) The "blind spot" -- the ability to see others mistakes but not your own -- exists for cognitive biases as well as for social biases. I'm not sure this is particularly interesting, though it's not trivial. Still, I doubt anyone in the field is even slightly surprised. However...

(2) The blind spot may be larger for people who score higher on cognitive abilities. The abstract is a little cautious here, but they're implying that there may be a relationship. This is kind of interesting if true. Note, however, that the claim in the abstract is that the *blind spot* is larger, NOT the susceptibility to actual cognitive biases. The distinction is important, because, towards the end of the abstract West et al say...

(3) "Additional analyses indicated that being free of the bias blind spot does not help a person avoid the actual classic cognitive biases". This is not entirely clear, and I really wish I had the actual paper. But it sounds a lot like they're saying that size of the bias blind spot is actually uncorrelated with susceptibility to cognitive bias.

Note that (2) and (3) are different, and together imply that there is no real correlation between cognitive ability and susceptibility to bias. And if there were such a correlation I'd have expected it to be referred to in the abstract, but there's nothing in the abstract that states this. The New Yorker summary blurs the distinction between the "blind spot" and "cognitive bias" which is rather unfortunate. For instance, this passage seems to be a misrepresentation...

"As they report in the paper, all four of the measures showed positive correlations, “indicating that more cognitively sophisticated participants showed larger bias blind spots.” This trend held for many of the specific biases, indicating that smarter people (at least as measured by S.A.T. scores) and those more likely to engage in deliberation were slightly more vulnerable to common mental mistakes."

... the quote from the original article is talking about the relationship between the blind spot and cognitive abilities. The surrounding text in the New Yorker is implying a relationship between cognitive bias ("common mental mistakes") and cognitive sophistication. I can't find anything in the abstract or the quotes from the paper that would support this interpretation.

Of course, the full paper might actually say something different. I hate trying to figure things out from the abstracts: they're always misleading in some respects, though never as bad as trying to guess from news articles.

[Full disclosure: A friend of mine has just finished running a similar study that looks at individual differences in susceptibility to cognitive biases. I won't discuss other people's work without their permission while it's under review, except to note that my their findings are broadly consistent with the West et al result (as it is described in the abstract, at least); but their paper has a slightly different focus so the two are not directly comparable.]
posted by mixing at 4:58 PM on June 12, 2012 [7 favorites]


Nobody's being smug in the article. Just here. (Which, as others have pointed out, isn't a bit surprising.)

he said, smugly.
posted by twoleftfeet at 4:59 PM on June 12, 2012


Sigh. Bad typo there on my "disclosure" statement... "my their".... the paper I'm hinting at is NOT mine. I've read it, but I didn't do any of the work and I'm definitely not an author.
posted by mixing at 5:00 PM on June 12, 2012


Why are we looking for breaking science news in the New Yorker anyway? I usually just use it when I'm a man-about-town in Manhattan and I need to know where to find a good wine and an evening's diversion on Broadway.
posted by twoleftfeet at 5:03 PM on June 12, 2012


Which half? The answer is either 1 or 47.

If you're being pedantic about which half, isn't the answer either 47 or 48?
His point was that the first half took 47 days to be covered and the second half took 1 day.
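
A throwaway simulation (mine, not from the article) makes the doubling concrete, assuming the patch starts at one arbitrary "unit" of coverage and the lake holds 2^48 units, so day 48 is exactly full:

```python
# Simulate the lily pad doubling: coverage doubles every day, and under
# these (assumed) starting units the lake is exactly full on day 48.
lake = 2 ** 48
coverage = 1  # day 0
half_day = None
day = 0
while coverage < lake:
    coverage *= 2
    day += 1
    if coverage == lake // 2:
        half_day = day  # the day the lake is exactly half covered

print(half_day, day)  # 47 48
```

The half-covered day falls one day before the full-covered day, no matter what units you pick.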
posted by dfan at 5:11 PM on June 12, 2012 [2 favorites]


Are smart people less likely to get them wrong? Are smart people's brains less likely to take those shortcuts?

See, this is bullshit. Because that's not what's really being tested here. What's really being tested here is, essentially, how easy is it to dupe people if you give them deceiving questions with little or no context. And surprise, humans are all pretty dupable! And I won't even go into what we really mean by "smart people" because it's quite clear that this study has a very (stupid) different definition than what is commonly understood. Frankly, I would reiterate my position that this is bullshit and no, this study, like so many other psychology studies, has not actually produced new knowledge.
posted by nixerman at 5:14 PM on June 12, 2012 [1 favorite]


This is all about communication, isn't it?

No, it is not. The point of the study was not "they got the answer wrong, therefore they are stupid." The point of the study was "we already know that many people get these kinds of questions wrong, because of the built-in shortcuts the brain takes when solving problems. Are smart people less likely to get them wrong? Are smart people's brains less likely to take those shortcuts?"

That's what they're testing.


The study is exploring whether someone can accurately interpret specific kinds of questions when they haven't been primed for careful reading. And lots of times the answer is no, which is interesting. The failure to understand is the blind spot that the researcher is studying, not the actual ability to solve the problem being posed. I suspect the number of people involved in the study who can't solve for X when 2X + 1 = 1.1 is vanishingly small.*

There are two parties involved in the study: the side producing the question, and the side producing the answer from their interpretation of the question. I totally get that the researcher is interested in the interpreter and why they tend to fuck up parsing meaning when words are arranged in certain ways. For whatever reason, however, I tend to be more interested in the question, who is generating it, why are they generating it, and how much control do they have over the outcome. (And I strongly believe the answer to the last question is, "A lot.")

*I haven't read the study and this could be wrong.
posted by jsturgill at 5:15 PM on June 12, 2012


> he said, smugly.

The adverb you want is "sadly."
posted by languagehat at 5:16 PM on June 12, 2012 [4 favorites]


I don't get the bat and the ball question, why isn't the ball 1 cent? Shouldn't the ball be anything less than a 56/54 split?
posted by geoff. at 5:21 PM on June 12, 2012


I suspect the number of people involved in the study who can't solve for X when 2X + 1 = 1.1 is vanishingly small.

I'm not sure what you meant, but the number of people who can solve a linear equation is small, though not vanishingly small. This has more to do with the horrid state of math instruction than anything else in this thread.
posted by twoleftfeet at 5:26 PM on June 12, 2012


I don't get the bat and the ball question, why isn't the ball 1 cent? Shouldn't the ball be anything less than a 56/54 split?
A bat and ball cost a dollar and ten cents. The bat costs a dollar more than the ball. How much does the ball cost?

The ball costs $0.05, the bat costs a dollar more, which is $1.05, and the two of them together cost $1.10.
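
If you'd rather check that arithmetic mechanically than trust my mental algebra, here's a throwaway brute-force sketch in cents (my own, obviously not part of the study):

```python
# Ball = x cents, bat = x + 100 cents, and together they cost 110 cents.
# Brute-force every possible ball price instead of doing the algebra.
ball = next(x for x in range(111) if x + (x + 100) == 110)
bat = ball + 100
print(ball, bat)  # 5 105
```

Working in whole cents sidesteps any floating-point fussiness, too.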

I got them both right, but 1) I had seen them both before, because part of my being "smart" is having seen a thousand trick logic questions before, and 2) I would have known to look at them carefully anyway, because the article warned me. I dunno how I'd do on an actual test but hopefully item 1 would help me a lot. The whole thing seems kind of silly - so what if you can be fooled because the patterns that make you smart can get fooled if you specifically try to fool them? That's how optical illusions work too, and I wouldn't want to give up my ability to parse visual input instinctively. Same thing with the famous video of the gorilla amidst the basketballs.
posted by dfan at 5:29 PM on June 12, 2012 [1 favorite]


Oh the bat costs a dollar more than the ball, I read it as "more than the ball" without a dollar. Well, obviously I'm too smart to pay attention to such details.
posted by geoff. at 5:31 PM on June 12, 2012 [1 favorite]


I'm not sure what you meant, but...

Two times the price of the ball, plus a buck, equals one dollar and ten cents.
posted by jsturgill at 5:33 PM on June 12, 2012


I don't think the study says anything about "spotting gaps in logic and irrational conclusions and faulty reasoning". A cognitive bias isn't the same thing as a logical fallacy. When you make a logical fallacy you have violated the laws of logic. Cognitive biases have more to do with "laws" of perception and expectation, and violations of those are shared by both cognition and the particular situation.

Well, we ought not to blame the study authors for me being a verbose jackass. It would perhaps have been better to have said, "are people who are generally more intelligent more likely to think rationally?"

We ought not to blame the study authors for me being a jackass in general, which I think I have been a bit in this thread. I am sorry.
posted by Diablevert at 5:33 PM on June 12, 2012


Okay... let's do this, smart people! Answer the next two questions correctly and win a prize! You only have about 5 seconds to think about your answer and write it down on a piece of paper. Correct answers will be provided below. Ready? Get your pencils and paper ready, because you only have 5 seconds per question. Please record your answers quickly. Here come the questions, please be ready to give your first answer....
  1. You are running in a race and you pass the person in second place. What place are you?
  2. You are running a race and you pass the person in last place. What place are you?
Answers will appear below.
posted by twoleftfeet at 5:40 PM on June 12, 2012 [2 favorites]


My takeaway from this is the concept of intuitive thinking. In that regard the article is an interesting one. Why is it that our minds tend to leap quickly to the wrong answer on what are seemingly easy questions?

Leonard Mlodinow writes about this a bit in his book, The Drunkard's Walk.

Remember the Monty Hall problem? I think it fits into this whole issue too.
posted by Rashomon at 5:44 PM on June 12, 2012 [1 favorite]


twoleftfeet, I got them right (I assume) in the allotted amount of time, but again, I had advance warning that they were tricky questions.
posted by dfan at 5:56 PM on June 12, 2012


Here are the answers:

˙ǝןqɐʌןosun sı ɯǝןqoɹd sıɥʇ ʇɐɥʇ sı ɹǝʍsuɐ ʇɔǝɹɹoɔ ǝɥʇ os ˙sɹǝɔɐɹ ɟo ʇuǝɯǝbuɐɹɹɐ ɹɐǝuıן ɐ uı ǝɹɐ noʎ uoıʇısod ɥɔıɥʍ ʍouʞ oʇ ǝןqıssod ʇou sı ʇı 'ʞɔɐɹʇ ɹɐןnɔɹıɔ ɐ uo (˙ʞɔɐɹʇ ʇɥbıɐɹʇs ɐ uo 'ʎpɐǝɹןɐ ǝɔɐןd ʇsɐן uı ǝɹɐ noʎ ssǝןun ǝɔɐןd ʇsɐן uı uosɹǝd ǝɥʇ ssɐd ʇ,uɐɔ noʎ) ǝsuǝs ǝʞɐɯ ʇ,usǝop uoıʇsǝnb ǝɥʇ ɹo 'ɹɐןnɔɹıɔ ǝq ʇsnɯ ʞɔɐɹʇ ǝɥʇ ʇɐɥʇ sı ɹǝʍsuɐ ʇɔǝɹɹoɔ ǝɥʇ(˙II
˙ǝɔɐןd ʇsɹıɟ uı uosɹǝd ǝɥʇ pǝssɐd ʇ,uǝʌɐɥ noʎ ˙ǝɔɐןd puoɔǝs (I

posted by twoleftfeet at 6:09 PM on June 12, 2012 [3 favorites]


Yeah, Monty Hall would seem to be an excellent example. Another one would be the incredibly long argument Ethereal Bligh and I had here with another MeFite about planes taking off from conveyor belts. The Nameless MeFite just would not believe that a conveyor belt would barely change how a plane took off. No matter how hard we tried, no matter how many different angles we approached it from, he absolutely would not accept any answer other than 'plane can't take off'.... and constantly used the term 'Barcalounger pilots', with withering scorn, to boot.
posted by Malor at 6:10 PM on June 12, 2012


Rot13 would be easier, twoleftfeet. Urgh.
posted by Malor at 6:11 PM on June 12, 2012


I'm on a device I can easily turn upside down. Sorry about you vertically anchored folks.
posted by twoleftfeet at 6:16 PM on June 12, 2012


twoleftfeet, I got them right (I assume)
I assumed incorrectly! I got the second one wrong. So... there you go.
posted by dfan at 6:16 PM on June 12, 2012


I suck at these sorts of questions. It's nice to know it's because I'm smart.
posted by Trochanter at 6:18 PM on June 12, 2012 [1 favorite]


Well of course a bunch of smartypants would think that.
posted by Saxon Kane at 6:20 PM on June 12, 2012


The gyroscope in my phone corrects the text display if I turn the screen to read twoleftfeet's answer. So instead I am trying to read the answer while doing cartwheels to beat the gyroscope. If I can read 3 words each cartwheel before the gyroscope flips the screen and I can't read it, and I finish reading the answer after 17 minutes, how long did I pause in the middle to stop myself from vomiting?
posted by Nanukthedog at 6:21 PM on June 12, 2012 [4 favorites]


95¢
posted by Trochanter at 6:23 PM on June 12, 2012


You know, on further thought: this is the thread about planes on conveyor belts, for reference. Might be a good example of cognitive biases at work.

If you're not familiar with the problem, it's basically that a plane tries to take off on a conveyor belt, where the belt is sped up to move backwards at the same speed that the plane is moving forward. Can it take off? And the answer, of course, is "yes, easily", because the propeller pushes on the air, not the ground. The wheels on a plane just decouple it from the pavement; they exist to remove friction, not to provide propulsion. Moving the ground backward, matching the plane's airspeed forward, just means the wheels spin twice as fast. This results in a tiny bit more rotational friction in the wheel bearings, extra-sensitive nosewheel steering, and pretty much no other visible effect on the takeoff. If the tires can take the doubled rotational speed without damage, there's no problem at all.

However, there's some confusion about the way the question is worded in that specific instance, which that thread goes into. It is a wild excess of beanplating.

On twoleftfeet's questions, I got #1 right, but missed #2. I thought the tricky bit was that the first answer was arrived at differently than the second answer, not that the second question was either logically impossible or incomplete. Totally missed that. I just played it straight, and assumed I was now in next-to-last place.
posted by Malor at 6:31 PM on June 12, 2012


The gyroscope in my phone corrects the text display

Put the phone on the table and walk around to the other side of the table.
posted by twoleftfeet at 6:31 PM on June 12, 2012


Much as it's a bit insufferable at times, I still don't mind Lesswrong for looking at cognitive biases.

I think an interesting self-referential way of exploring whether people pick up on cognitive biases would be to integrate statistical errors using these biases into the article, and only correct them in the footnotes, to see whether a reader takes them as a given. That'd be more of a realistic situation.
posted by solarion at 6:31 PM on June 12, 2012


It's worth spending time doing something, as a job or a hobby, in which there's no margin for error -- and yet where no one is going to die if you screw up. I'm talking about tasks where it's very clear to you when you've failed.

Most of us work in fields where there's a lot of fuzziness and redundancy, which allow us to make mistakes and get away with them. For instance, if you write for a living, you can make a typo or two and nothing blows up. Which means there's a chance that you -- and even your editors -- won't notice the typo. That's not a bad thing. Slack is generally good. We don't want super-fragile systems. But it's enlightening and character building to, at least for a limited time period, work without any slack.

Many video games are tough in this way. You can't kinda' shoot the alien. You either shoot him or you don't. Tightrope walking is an extreme example. One should probably practice with a net, but it's a cool activity, because you're either on the rope or you're falling, and when you fall it's clearly your own fault and no one else's.

In my experience, the best way to confront your cognitive biases is to try computer programming. Your program runs or it doesn't, and when it doesn't, it's because you made an error. I'm not claiming that programming will stop you from having biases. But it will force you to confront your fallibility and work through it.

Here's one of the best things I've ever read about being a programmer. It matches my experience:

"I no longer equate thinking I'm right about something with actually being right about it.

It's now very easy for me to entertain the thought that I may be wrong even when I feel pretty strongly that I'm right. Even if I've been quite forceful about something I believe, I'm able to back down very quickly in the face of contradicting evidence. I have no embarrassment about admitting that I was wrong about something.

That all came from decades of working in a discipline that mercilessly proves you to be mistaken a dozen times a day, but that also requires you to believe you're right if you're going to make any progress at all."

This, to me, is the heart of the quote. It's a great mental space to inhabit: "It's now very easy for me to entertain the thought that I may be wrong even when I feel pretty strongly that I'm right." It's taken me many years, but I've learned to be skeptical as soon as I get that "I'm positive I'm right" feeling. Skeptical without feeling like I'm a loser.

Unfortunately, our schools tend to train people in the opposite direction. Failure, in most educational settings, is a bad thing. Which is ass backwards. We learn best from failure, but not if we've internalized "getting an F means you're a loser" from years of academic indoctrination.
posted by grumblebee at 6:36 PM on June 12, 2012 [12 favorites]


Put the phone on the table and walk around to the other side of the table.

Silly! My table is against the wall. I can't see through the wall, now can I? Maybe you have some sort of an MRI device at your house to do that, but it's cartwheels for those of us in the real world.

Geesh. Now the smart people are flaunting their wealth as well as their intelligence and cognitive bias.
posted by Nanukthedog at 6:41 PM on June 12, 2012


One of my favorite trick questions, which works best against people who are very mathematically knowledgeable, is the Two Trains Puzzle:
Two trains are on the same track a distance 100 km apart heading towards one another, each at a speed of 50 km/h. A fly starting out at the front of one train, flies towards the other at a speed of 75 km/h. Upon reaching the other train, the fly turns around and continues towards the first train. How many kilometers does the fly travel before getting squashed in the collision of the two trains?
This becomes tricky for the mathematically inclined, because the setup of the problem suggests summing a geometric series - the fly goes back and forth over ever-shorter distances, and the total distance is the sum of those legs. The trick, though, is just to think about how long the fly is in the air.

Famously, John Von Neumann, a very brilliant mathematician and an exceptionally quick mental calculator, was posed this puzzle and he responded immediately with the correct answer. "So you know the trick then, John?" somebody asked. "Trick? What trick? You just sum a geometric series."

There might be some benefit to being smart, after all.
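For what it's worth, the two approaches really do agree; here's a quick numeric sketch using the puzzle's figures (trains 100 km apart at 50 km/h each, fly at 75 km/h) that sums the legs the hard way and compares against the easy time-based answer:

```python
# Easy way: the trains close at 50 + 50 = 100 km/h, so they meet in
# 100 / 100 = 1 hour, during which the fly covers 75 km.
easy_answer = 75.0 * (100.0 / (50.0 + 50.0))

# Von Neumann's way: sum the back-and-forth legs of the geometric series.
gap = 100.0          # current distance between the trains
total = 0.0          # distance flown so far
for _ in range(60):  # each leg shrinks the gap by a factor of 5; 60 is plenty
    t = gap / (75.0 + 50.0)   # fly and oncoming train close at 125 km/h
    total += 75.0 * t         # distance flown on this leg
    gap -= (50.0 + 50.0) * t  # both trains kept moving meanwhile
print(easy_answer, total)     # both converge to 75.0
```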

I'll put one more question out here. I like this question because I've posed it to many people, and generally the more mathematically sophisticated the person, the more difficult it is for them to answer. A little knowledge is a dangerous thing!

If you haven't seen this already, which row comes next?
     1
    11
    21
   1211
  111221
  312211 
posted by twoleftfeet at 6:49 PM on June 12, 2012


Oh for fuck's sake, fuck the fucking lily pads! What the hell is with this weird "if-you-refute-the-premise-of-the-lily-pad-question,-Lehrer-is-destroyed!" theme in this thread.

It was a somewhat stupid question; given the purpose of the question, its stupidity is somewhat ironic. Some humans find irony to be amusing. Amused humans often behave in a manner they call "silly," perhaps in order to intensify and/or draw out said amusement.

Correct me if I've missed it, but I don't recall anybody here making the claim that that very slight iffiness of that single question cancels out all the serious research that this man has for whatever reason decided to dedicate 40 years of his life to.

People were having fun—math fun!—and you ruined it. Congratulations.
posted by Sys Rq at 6:50 PM on June 12, 2012 [1 favorite]


I just realized the Stack Overflow thread I linked to, above, is defunct. Here's some kind of cache of it.
posted by grumblebee at 7:03 PM on June 12, 2012


I knew the answers cuz of the dinks.
posted by Uther Bentrazor at 7:38 PM on June 12, 2012


A woman tells you she has two children. She later tells you a story about her daughter.
What are the odds that her other child is a boy?
posted by rocket88 at 7:54 PM on June 12, 2012 [1 favorite]


How many points do I get for getting both trick questions right? The correct answer is none, since I was tipped off to the fact that they were trick questions, which influenced my meta-reasoning.

For further reading:

The basic laws of human stupidity (see #2)

The Introspection Illusion

This awesome Car Talk Puzzler

For what it's worth, I teach middle school math, and I constantly tell my students that "That was easy" is the most dangerous idea you can have while doing math.
posted by alphanerd at 7:56 PM on June 12, 2012


The article's thesis would have been much more plausible if it had been written by Malcolm Gladwell.
posted by Wash Jones at 8:20 PM on June 12, 2012 [1 favorite]


The Introspection Illusion is often invoked to explain the bias blind spot. I forgot that Car Talk does puzzles, which at least suggests that the bizarre properties of cognition impact us far and wide, including our car repair.
posted by twoleftfeet at 8:22 PM on June 12, 2012


A woman tells you she has two children. She later tells you a story about her daughter.
What are the odds that her other child is a boy?


2 in 3: the possible combinations of children are boy/boy, boy/girl, girl/boy, and girl/girl. Eliminating the first as incongruent with the story, we have 3 possible combinations of kids, 2 of which contain a boy.
posted by explosion at 8:29 PM on June 12, 2012


A woman tells you she has two children. She later tells you a story about her daughter.
What are the odds that her other child is a boy?


Before anyone spends a lot of time thinking about this, I'll warn you that the Boy or Girl paradox is a notoriously thorny problem, and that conditional probability is not usually the way people think, so there's a minefield of cognitive bias there that can lead you astray.
posted by twoleftfeet at 8:35 PM on June 12, 2012


I'm pretty disappointed in those laws of human stupidity. I expected better from a site called "rationalwiki." Perhaps I shouldn't have.
posted by wobh at 8:42 PM on June 12, 2012


The study has flaws... but I think the underlying idea that smart people are sometimes overconfident in their abilities has merit. Sometimes the best answer is "I don't know, let me think about it for a while."
I also wonder about cultural and personality biases in the study. Would smart people with poor self-esteem do better because they'd be more cautious and aware of their failings? Was the study done in America, where we do everything fast - fast food, fast cars, fast talking? Would the results be different in countries with a slower pace of living? What about urban vs. rural, etc.?
posted by j03 at 8:59 PM on June 12, 2012 [1 favorite]


explosion has it.
posted by rocket88 at 9:16 PM on June 12, 2012


Except... if someone comes up to you and says both "I have two children" and then "Let me tell you about my daughter," I'd actually rate the chances that the other is a boy at higher than 66%. In her mind she is not just communicating the sex of the child in question but differentiating that child from the other one, so the premise of the question is only valid if you treat the woman as a theoretical construct instead of a person. If she had two daughters she'd be more likely to say "one of my daughters." I think, at least. (This happens to me all the time with brainteasers.)
posted by JHarris at 9:34 PM on June 12, 2012


To put this in a broader context:

All of the problems in the New Yorker article involve some basic examples of quantitative reasoning/literacy - something that we know many people (including many university graduates) are bad at because it doesn't get taught. The ball and bat problem is a basic algebra problem; the lily pad problem relates to exponential growth; and anchoring bias is one of several common quantitative reasoning errors that are discussed in all the QL talks I've been to.

Math education (in the US and Canada at least) seems to be, in general, focused on rote learning of facts and algorithms to the exclusion of understanding and quantitative reasoning. This is exacerbated by the fact that real reasoning - not just being able to recognize a specific pattern of mathematical question and remember the "trick" for that particular type of problem - is hard to test for on a multiple-choice standardized test.

As folks have noted, the word problems in the New Yorker article don't present realistic enough situations for people to reason about; instead, they suffer from many of the problems described in this talk. But a larger issue, perhaps more directly related to the FPP, is that recognizing which problems one should apply certain mathematical techniques or reasoning to is a learned skill, and we apparently do a poor job of teaching it on the whole.
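(The bat-and-ball problem really is one line of algebra. Taking the classic numbers - $1.10 total, bat costs $1.00 more than the ball - a quick sketch, working in cents to avoid float noise:)

```python
# Let x be the ball's price in cents: x + (x + 100) = 110  =>  2x = 10.
ball = (110 - 100) // 2  # 5 cents, not the intuitive 10
bat = ball + 100         # 105 cents
print(ball, bat)         # 5 105
```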

The problem seems to go beyond just educating students on quantitative reasoning. This clip (from the beginning of this longer video on science education) shows Harvard graduates still holding basic misunderstandings about what causes the seasons. A similar question that I've been told many people get wrong is: when a plant, e.g. a tree, grows, where does the material that forms its mass come from? That is, a plant's dry mass is largely carbon - where is it getting all that carbon from?

There are probably examples from other academic disciplines as well. Some of the problem with teaching this sort of understanding in the case of the science examples seems to be that there is something in each example that runs counter to people's previous life or academic experience. For example, plants get their carbon from the air, via photosynthesis. But the air doesn't feel like it has mass and could be converted to a solid thing - counterintuitive! So in order for students to learn/understand these "basic" science concepts or quantitative literacy tidbits, they first have to unlearn some false (or merely limited) understanding of the world. And unlearning something is very hard.

With quantitative literacy, well, people generally don't just naturally think algebraically, and as in the talk I linked to above, we often don't teach students how to pose or set up (real) mathematical/quantitative problems. And then there are cognitive bias effects, like the anchoring effect.

The scientific study itself seems to be about something kind of different though - related to cognitive biases, but maybe not so much related to the average person's difficulty in setting up an algebra problem. I haven't had a chance to read it yet, but I'm getting the impression that the New Yorker article is... maybe a bit confused? Or does the study actually consider inability to set up an algebra problem to be an example of a cognitive bias? Or does the study (or perhaps just the New Yorker article) erroneously assume that (at least the majority of) university students learned how to set up an algebra problem in their basic algebra classes, and are merely failing to apply this skill to the particular example problems?
posted by eviemath at 9:45 PM on June 12, 2012 [4 favorites]


A woman tells you she has two children. She later tells you a story about her daughter.
What are the odds that her other child is a boy?


The probability is either exactly zero or exactly one, because the event is already realized.
posted by ROU_Xenophobe at 9:56 PM on June 12, 2012 [3 favorites]


That's exactly the answer you should give if some asshole interviewer who has read too many magazine articles tries that shit.
posted by Artw at 10:02 PM on June 12, 2012


I'll put one more question out here. I like this question because I've posed it to many people, and generally the more mathematically sophisticated the person, the more difficult it is for them to answer. A little knowledge is a dangerous thing!

A little knowledge actually helped me with that one, I got it in a few seconds. Or maybe I'm just mathematically unsophisticated.
posted by L.P. Hatecraft at 11:26 PM on June 12, 2012


Because in her mind she is not just communicating to you the sex of the child in question but is differentiating it from her other ....

You are thinking like a Grice. It kind of frustrates me that these types of questions often seem engineered to violate one or another grice.
posted by benito.strauss at 12:08 AM on June 13, 2012 [1 favorite]


I haven't had a chance to read it yet, but I'm getting the impression that the New Yorker article is... maybe a bit confused? Or does the study actually consider inability to set up an algebra problem to be an example of a cognitive bias?
No, it's that smart people think they can get away with skipping setting up the algebra problem entirely because they think the answer is obvious.
posted by dfan at 5:10 AM on June 13, 2012 [2 favorites]


boy/girl: the 2/3 answer assumes an equal probability distribution of sex at birth, which is not the way it works, is it?

race problem: doesn't the answer to the second problem undermine the answer to the first, which should now be "I don't know"? Also, where does it say that I have actually entered the race? Maybe it passed my house, and I grabbed my shoes and decided to join in, first passing the last place runner.
posted by thelonius at 5:28 AM on June 13, 2012


No, it's that smart people think they can get away with skipping setting up the algebra problem entirely because they think the answer is obvious.

So it's my third option: the study and/or the article erroneously assumes that "smart" (or at least credentialed) people universally or near-universally know how to set up an algebra problem in the first place, and therefore assumes that when they don't set one up, it's because they're skipping that step in the belief that the answer is obvious without algebra? That's a very big assumption, and I think quite likely a verifiably false one (though I don't have any exact data).
posted by eviemath at 7:51 AM on June 13, 2012


She later tells you a story about her daughter

Does she actually refer to the child as "my daughter" when she's telling the story? In that case it is usually safe to assume the other kid is a boy; otherwise she'd have said "my younger daughter" or something similar. This question isn't about statistics at all; it is about social conventions for talking about your offspring.
posted by DreamerFi at 8:06 AM on June 13, 2012


I admit it. I'm stupid. Hence the nick.
posted by Mental Wimp at 9:13 AM on June 13, 2012


I now fully agree with the premise of the article after seeing so many smart people here overthinking simple problems.
posted by rocket88 at 9:28 AM on June 13, 2012 [1 favorite]


part of measured intelligence is how the person adapts to the test-taking method. in these cases, i think 'smart people being stupid' might have more to do with how the subjects accommodate themselves to the test, more in the sense of predicting what the testing method is trying to do (the trick, say). a smart person will look at the lily-pad question and see that it tries to fake you out with simplicity and so will look for the trick. (though the same smart person can fall into a trap--even when he is vigilant for it--of overthinking the question.) but then the tree question, the variation of the answers with the 'anchor' has to do with exercising cues within the question, particularly for those who (like me) couldn't estimate the height of a tree on sight and don't have their own point of reference for defining a tall tree versus a short one.
posted by fallacy of the beard at 10:32 AM on June 13, 2012


I'll put one more question out here. I like this question because I've posed it to many people, and generally the more mathematically sophisticated the person, the more difficult it is for them to answer. A little knowledge is a dangerous thing!

If you haven't seen this already, which row comes next?

1
11
21
1211
111221
312211


13112221

Is that correct? And I do indeed suck at math.
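(It is correct - each row reads the previous one aloud, one 3, one 1, two 2s, two 1s. A short generator for this "look-and-say" sequence confirms it:)

```python
from itertools import groupby

def look_and_say(row: str) -> str:
    """Read the row aloud: '1211' -> 'one 1, one 2, two 1s' -> '111221'."""
    return "".join(str(len(list(run))) + digit for digit, run in groupby(row))

row = "1"
for _ in range(6):
    print(row)            # prints the six rows from the puzzle
    row = look_and_say(row)
print(row)                # 13112221 -- the row after 312211
```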
posted by keep it under cover at 10:42 AM on June 13, 2012


No, it's that smart people think they can get away with skipping setting up the algebra problem entirely because they think the answer is obvious.

You should actually do both. It's easy to make an error setting up and solving an algebra problem and not notice it. That's why I tell students to check whether or not their answers make sense. You should be building up both your techniques and your intuition.
posted by benito.strauss at 12:25 PM on June 13, 2012 [1 favorite]


A woman tells you she has two children. She later tells you a story about her daughter.
What are the odds that her other child is a boy?


As the Wikipedia article linked above by twoleftfeet states, the probability that the other child is a boy in this case is actually 50/50 under the most natural assumptions, e.g. that people are as likely to tell stories about daughters as sons. Statements of the problem where you can talk about "the other child" will generally give a 50/50 chance of that child being a boy. To clearly produce the 2/3 probability you need a statement like the one in the article:

Mr. Smith has two children. At least one of them is a boy. What is the probability that both children are boys?
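The gap between the two readings shows up immediately in a quick Monte Carlo sketch (assuming equal odds of boys and girls, and, for the second reading, that the mother mentions one of her two children uniformly at random):

```python
import random

random.seed(1)
families = [(random.choice("BG"), random.choice("BG")) for _ in range(200_000)]

# Reading 1: "at least one child is a girl" -- condition on the family.
with_girl = [f for f in families if "G" in f]
p1 = sum("B" in f for f in with_girl) / len(with_girl)

# Reading 2: she mentions one child at random, and it happens to be a girl.
mentioned = [(f, random.randrange(2)) for f in families]
girl_mentioned = [(f, i) for f, i in mentioned if f[i] == "G"]
p2 = sum(f[1 - i] == "B" for f, i in girl_mentioned) / len(girl_mentioned)

print(round(p1, 2), round(p2, 2))  # ~0.67 vs ~0.50
```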
posted by tomcooke at 1:01 PM on June 13, 2012


You are thinking like a Grice. It kind of frustrates me that these types of questions often seem engineered to violate one or another grice.

They don't seem that way to me; rather, people make unwarranted assumptions and then backfill their argument that the question is wrong. Take the alternative approaches to the lilypad problem above:

Lilypads don't work that way/no lake is so big

That you know about, which may not include all possible lakes or lilypads. In any case, you were given simple declarative statements and asked to draw a conclusion, not evaluate the premises.

There's no way to know how full the lake was at the beginning of the 48th day (eg 95% full) so it's impossible to quantify

But you were told it takes 48 days to fill the lake, not that the lake would be filled on the 48th day. Sure, a period of exactly 48 days seems arbitrary, but so does perfectly exact doubling. This objection comes up in threads about economics all the time, where people say the discipline makes artificially simple assumptions about the world. Well, duh - so does a physics textbook, otherwise we'd do no science because we're not able to fit the entire known universe within the scope of a single calculation.

Which half? 47 days or 1

A smarter objection, which demonstrates that the question measures linguistic as well as quantitative sophistication: if (conditional > temporal) then (conditional' > temporal?). It's implied in the question that coverage is being evaluated from the temporal perspective at day 0. On a more rigorous footing, if the second half of the lake were implied (day 47>48), then you'd be giving up the distinction between half and whole and going into divide-by-zero territory, which also admits of nonsensical answers such as '0 days; suppose the lake is half-covered, then it is already half-covered' or recapitulations of Zeno's paradox, etc.

You need prior knowledge of exponential growth

As long as you understand what the word 'double' means, you don't. If you've seen this sort of problem before then of course you'll be able to answer it faster, but unlike the Voight-Kampff test, reaction time isn't a factor.
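Indeed, taking the puzzle's 48-day figure at face value, "doubles every day" pins down the whole trajectory - a tiny sketch:

```python
def coverage(day: int) -> float:
    """Fraction of the lake covered on a given day, if the patch doubles
    daily and covers the whole lake on day 48."""
    return 2.0 ** (day - 48)

print(coverage(48))  # 1.0 -- the whole lake
print(coverage(47))  # 0.5 -- half the lake: day 47, not day 24
print(coverage(24))  # ~6e-8 -- the "intuitive" answer is off by millions-fold
```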

I find it hard to accept complaints that people aren't properly taught how to do this sort of abstraction in the school system. All mathematical abstraction boils down to the fact that counting is independent of the things being counted - not everyone gets that, but I suggest that those who don't are suffering from cognitive deficiency rather than cognitive bias, whether innate or environmental in origin.
posted by anigbrowl at 2:59 PM on June 13, 2012 [1 favorite]



1. You are running in a race and you pass the person in second place. What place are you?
2. You are running a race and you pass the person in last place. What place are you?


Um, if the race might be held on a circular track, the answer in both cases is "undeterminable". Or do I misunderstand the questions?
posted by Mental Wimp at 2:06 PM on June 14, 2012


Well, duh - so does a physics textbook, otherwise we'd do no science because we're not able to fit the entire known universe within the scope of a single calculation.

Yes, or as G. E. P. Box put it: "All models are wrong, but some are useful."
posted by Mental Wimp at 2:07 PM on June 14, 2012 [1 favorite]






This thread has been archived and is closed to new comments