Do you like this post? a) Yes b) Of course c) How could I not? d) Maybe
November 8, 2008 3:26 AM

Rethinking Public Opinion - the immense importance of public opinion polling in American politics, and the under-reported problems at the heart of the enterprise, combine to call for a serious critique of the polling industry, its assumptions, and its methods.
posted by Gyan (40 comments total)
 
All you need to do is check which polls were way off, which were loaded to support a view, and which were consistently right against the final results of the election:
Rasmussen seems fine, and http://www.fivethirtyeight.com/ became everyone's reliable workhorse.
posted by Postroad at 6:38 AM on November 8, 2008


But what do Penn and Teller think?
posted by fatllama at 6:47 AM on November 8, 2008


There was indeed a call for this. Luckily for us all, Nate Silver picked up.
posted by Tehanu at 6:48 AM on November 8, 2008


One of the inherent problems with all these polls I end up hearing about is that nobody ever asked me for my opinion.
If they didn't ask me, who else didn't they ask? Mostly everyone, right?
posted by CitrusFreak12 at 6:51 AM on November 8, 2008 [1 favorite]


I thought Nate Silver was crazy to predict 330+ electoral votes for Obama. Crazy like a fox!
posted by mecran01 at 7:08 AM on November 8, 2008 [2 favorites]


The Central Limit Theorem is not an "arcane theory."
posted by shadow vector at 7:13 AM on November 8, 2008 [2 favorites]


fivethirtyeight doesn't do its own polls though - it's an aggregate of polls with some clever statistical tinkering.
posted by Artw at 7:20 AM on November 8, 2008


Like any other fool out there, when asked, I'll say just anything except what I'm really thinking. I know Big Brother is listening and watching.
posted by doctorschlock at 7:28 AM on November 8, 2008


Well, this is the most deeply incorrect thing I've read in a long time.

The idea that we abandoned the search for a "psychology of public opinion" is just, well, flatly incorrect. So deeply wrong that either the author is a liar, or so ignorant that his writings need no further consideration.

The psychology of public opinion is and has been, instead of abandoned, a hotbed of inquiry. There are several competing models of how people arrive at the words coming out of their mouths. One of the larger camps is the online theory, in which for some responses you store an attitude that you update as necessary and throw away the information that caused you to update it -- presidential approval, maybe; you remember that you don't like Bush but not the information that led you to it. The other main camp is memory-based, in which your opinion does not exist until you are asked for it, and then you construct it on the fly out of whatever's floating around in the top of your head.

To put that differently, one of the main theories of where public opinion comes from incorporates directly a critique the author makes of public opinion research.

It is also the case that there is a vast amount of research done on how the process of polling itself influences the resulting statements that are gathered. Question wording effects, question order effects, priming effects, interviewer effects and so on are not just assumed away, they are an active object of study.

It is also the case that while most popular-press polling does not use them, polling need not be an instant snapshot of opinion with no ability to look at how individual opinion changes. Panel studies, where you go back to the same sample multiple times over time, are common enough in academic settings.

The author fundamentally does not understand what the margin of error means. A 3% MOE means only that we can be 95% confident that if we actually gave everyone the exact same survey, the value we got back would be within 3 percentage points of the one we have here. That is all.

Anyway, this is a big pile of steaming claptrap. It is not an accident that the author does not cite a single example of modern research on public opinion, since the existence of the various research questions themselves gives the lie to the author's claims.
posted by ROU_Xenophobe at 7:35 AM on November 8, 2008 [8 favorites]
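For the arithmetic behind ROU_Xenophobe's point about the 3% MOE: a minimal sketch of where a reported "plus or minus 3 points" comes from under simple random sampling of a proportion. The sample size of roughly 1,067 and the 50/50 split are assumptions for illustration, not figures from any poll discussed here.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% margin of error for a sample proportion under simple random sampling."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# A poll of ~1,067 respondents with a 50/50 split gives the familiar
# "plus or minus 3 points" -- sampling error only, saying nothing about
# question wording, nonresponse, or house effects.
print(round(margin_of_error(0.5, 1067) * 100, 1))  # ~3.0
```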


I read 538 every day leading up to the election, and it seems like the polls were really good overall, accurately predicting states. Obama polled slightly lower than he ended up getting, most likely due to his turnout game and the fact that it got so many new voters (and thus not 'likely' voters) to the polls.

Also, I was polled extensively before the 2004 Iowa Caucus (I had rather foolishly put my phone number on a list for one of the candidates, and it got around), a couple times before the '08 caucus and once before the 2008 general election.
posted by delmoi at 8:56 AM on November 8, 2008


A very good idea is deliberative democracy, where, once the legislature reaches a decision, the various factions within the legislature appoint advocates. These advocates then debate the decision before some large jury of over 100 randomly selected citizens, who must approve or reject the decision and may well edit it (cut pork). You could then replace the presidential veto with merely the power to nominate a presidential advocate (including the president speaking for himself).

In practice, deliberative opinion polls are used to find out what ordinary people would support if they were given enough information.
posted by jeffburdges at 9:02 AM on November 8, 2008


The author fundamentally does not understand what the margin of error means. A 3% MOE means only that we can be 95% confident that if we actually gave everyone the exact same survey, the value we got back would be within 3 percentage points of the one we have here. That is all.

Actually, I think it is pollsters who misunderstand the definition of "margin of error". The term should roll up the practical real-world errors that might be encountered, but all it really means is the theoretical best case. To put it another way: margin of error should mean "it can't be less accurate than this", but in reality it means "it can't be more accurate than this".
posted by Chuckles at 9:10 AM on November 8, 2008


Well, I don't mean "in reality", I mean in the field of opinion polling.
posted by Chuckles at 9:13 AM on November 8, 2008


I was polled several times leading up to the 2008 election, but the poll I was happiest to be included in was one to gauge George W. Bush's approval rating about two years ago.

The pollster asked whether I "approved" or "disapproved" of the job Bush was doing. I asked, "Do you have 'strongly disapprove'?" They did! I said, "Put me down for that, then."

Apparently that option was only available by special request.
posted by isogloss at 9:17 AM on November 8, 2008


One of my favorite polls from this year was an Elway poll on the Washington gubernatorial race. In Washington, candidates can declare themselves to "prefer" whatever party they want, and the Republican candidate chose to use "GOP Party" instead of "Republican Party".

Half of the respondents were asked about "Democrat Christine Gregoire" versus "Republican Dino Rossi" and the other half were asked about "Christine Gregoire, who prefers the Democratic Party" and "Dino Rossi, who prefers GOP Party." The Republican Rossi trailed 51-41, while the GOP Rossi trailed 48-44.
posted by milkrate at 9:19 AM on November 8, 2008 [1 favorite]
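A rough sketch of how you might check whether a wording gap like the one milkrate describes is bigger than sampling noise, using a two-proportion z-test on Rossi's share under the two wordings. The comment doesn't give sample sizes, so n=250 per half-sample is purely an assumption.

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Rossi's share under the two wordings: 41% as "Republican", 44% as "GOP Party".
# n=250 per half-sample is a guess for illustration only.
z = two_prop_z(0.41, 250, 0.44, 250)
print(round(z, 2))  # ~0.68 -- with half-samples this small, a 3-point wording
                    # effect is well inside the noise; a bigger experiment
                    # would be needed to pin it down.
```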


I just rechecked 538 - and they really did call the popular vote very well (they called 52.3 to 46.2 - and the first article I found online gives 52 to 46, probably rounded). They had projected only 348 electoral votes for Obama, because they mis-projected Indiana and the one Nebraska part-win electoral vote. But while they didn't really call Missouri, they seem to have put it on the McCain side anyway (as it went). I'm really impressed. I was impressed through most of the election, but the closeness of their popular vote call just got me going "whoa".
posted by jb at 9:20 AM on November 8, 2008


"it can't be less accurate than this", but in reality it means "it can't be more accurate than this".

That doesn't make sense. If the outside boundaries are established, it can be more accurate simply by chance.
posted by weapons-grade pandemonium at 9:41 AM on November 8, 2008


There was indeed a call for this. Luckily for us all, Nate Silver picked up.

One of the reasons 538 is so awesome is that Poblano is very good at assessing and weighting the polls and pollsters themselves.
posted by oneirodynia at 9:41 AM on November 8, 2008


Whoops, by "Poblano" I of course mean Nate Silver.
posted by oneirodynia at 9:41 AM on November 8, 2008


If you read the FAQ, it's not just basic statistical tinkering that Nate does at fivethirtyeight. He not only weights polling agencies according to historical accuracy scores, he also brings his own (and I think brilliant) demographic analysis to bear -- which is how he predicted Obama wins in Iowa and North Carolina when everyone was expecting Hillary blowouts.

I was a bit skeptical of his models for a while but they sure have been vindicated.
posted by chimaera at 9:54 AM on November 8, 2008
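A minimal sketch of the weighting idea chimaera describes -- averaging polls with weights driven by each firm's track record (and, here, sample size). This illustrates the general approach only; the weighting scheme is an assumption, not fivethirtyeight's actual model.

```python
def weighted_average(polls):
    """Combine poll results, weighting each poll by its pollster's historical
    accuracy score and the square root of its sample size (both weighting
    choices are assumptions for illustration)."""
    total_w = sum(p["accuracy"] * p["n"] ** 0.5 for p in polls)
    obama = sum(p["obama"] * p["accuracy"] * p["n"] ** 0.5 for p in polls) / total_w
    mccain = sum(p["mccain"] * p["accuracy"] * p["n"] ** 0.5 for p in polls) / total_w
    return obama, mccain

# Hypothetical polls: 'accuracy' is a 0-1 score for the pollster's past performance.
polls = [
    {"obama": 52, "mccain": 44, "n": 1000, "accuracy": 0.9},
    {"obama": 49, "mccain": 47, "n": 600,  "accuracy": 0.6},
    {"obama": 53, "mccain": 43, "n": 800,  "accuracy": 0.8},
]
print(weighted_average(polls))  # pollsters with better records pull the average harder
```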


I just rechecked 538 - and they really did call the popular vote very well (they called 52.3 to 46.2 - and the first article I found online gives 52 to 46, probably rounded).

In fact, the latest count (scroll down) puts it at 52.6 to 46.1. Scary accurate.
posted by EarBucket at 10:01 AM on November 8, 2008


I'd pay money for a Nate Silver market report.
posted by mecran01 at 10:09 AM on November 8, 2008


This from a website that apparently thinks we need to re-thing the statistics behind global warming, as well.

"The polar bear is 'the iconic example of the devastating impacts of global warming on the Earth’s biodiversity,' according to attorneys at the Center for Biological Diversity. How can that be, if there are more polar bears alive today than there have been in decades?"

I dunno. How can that be?
posted by Devils Rancher at 10:13 AM on November 8, 2008


sorry -- re-think -- like I apparently need to do before psotting.
posted by Devils Rancher at 10:14 AM on November 8, 2008


i predicted 53% to Strabo (McCain)
47% to Apollo (Obama)
but that was before this artificial economic "Crisis"
bang-on in reverse.
Polls are the greatest measure of human political stupidity.
i love them.
posted by clavdivs at 10:30 AM on November 8, 2008 [1 favorite]


Margin of error should mean "it can't be less accurate than this", but in [the field of opinion polling] it means "it can't be more accurate than this".
That doesn't make sense. If the outside boundaries are established, it can be more accurate simply by chance.
Right, but that kind of "accuracy" has no significance. Pollsters don't even make that claim.. The only claim made is the range, and the probability of falling within that range. The problem I'm asserting is that the range given by pollsters is often taken to imply a worst case, rather than the best case that it really is.
posted by Chuckles at 10:56 AM on November 8, 2008


Well, not right.. My whole point is that they aren't outside boundaries at all. They are inside boundaries.
posted by Chuckles at 10:58 AM on November 8, 2008


If you think domestic polling (on elections, etc) may be unreliable, you should look closer at war-zone polls (and other sample-based surveys, no matter how scientific-sounding).

"Burnham et al. [Lancet 2006] used a design based on the World Health Organization's Expanded Programme on Immunization (EPI) cluster survey design (WHO 1991). The EPI was created to allow for inexpensive surveys of immunization rates in areas where lists of households do not exist. This approach has been strongly criticized by the survey statistical community for more than 10 years, because it does not provide known probabilities of selection and therefore cannot produce unbiased estimates. [...] Lepkowski expressed concern that 'the poor practice of the EPI simple cluster sampling method is now being used as a standard for inexpensive surveys on other health topics.' This concern is demonstrated here by its use for estimating mortality." (David A. Marker, Methodological Review of "Mortality After the 2003 Invasion of Iraq: A Cross-Sectional Cluster Sample Survey")
posted by internationalfeel at 11:07 AM on November 8, 2008
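One part of the statistical worry behind that critique (beyond the unknown selection probabilities the quote emphasizes) can be made concrete with the usual design-effect approximation for cluster samples. The cluster size and intraclass correlation below are illustrative assumptions, not figures from the Lancet study.

```python
def design_effect(cluster_size, icc):
    """Approximate variance inflation from cluster sampling: deff = 1 + (m - 1) * rho."""
    return 1 + (cluster_size - 1) * icc

def effective_sample_size(n, cluster_size, icc):
    """How many simple-random-sample respondents the clustered sample is 'worth'."""
    return n / design_effect(cluster_size, icc)

# Example (assumed numbers): 40 households per cluster and an intraclass
# correlation of 0.1 cut an apparent n of 1,800 down to ~367 effective
# respondents -- the real margin of error is much wider than the raw n suggests.
print(round(effective_sample_size(1800, 40, 0.1)))
```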


-How can that be, if there are more polar bears alive today than there have been in decades?"
-I dunno. How can that be?


The IUCN Polar Bear Specialist Group reclassified the polar bear as a vulnerable species on the IUCN's Red List of Endangered Species at their most recent meeting (Seattle, 2005). They reported that of the 19 subpopulations of polar bears, five are declining, five are stable, two are increasing, and seven have insufficient data on which to base a decision. On May 14, 2008, the U.S. Department of the Interior reclassified the polar bear as a Threatened Species under the Endangered Species Act, citing concerns about sea ice loss. Canada and Russia list the polar bear as a "species of concern."

This and more from Polar Bears International. And now back to your regularly scheduled polling thread...
posted by sophist at 11:47 AM on November 8, 2008


But what do Penn and Teller think?

They think shouting, swearing, reducing complex issues to sub-moronic soundbites, finding one idiot they can mock, and being assholes about it is identical to being freethinkers, as always.
posted by Astro Zombie at 11:55 AM on November 8, 2008 [13 favorites]


Astro Zombie, you complete me.
posted by Marisa Stole the Precious Thing at 12:07 PM on November 8, 2008


I thought the Penn and Teller sketch showed something very profound, which they seemed to entirely miss, or else willfully ignore.

They didn't demonstrate that polling is useless - they showed beautifully how polling can help you shape your political framing in order to win support for your ideas. If you support immigrants, then you don't talk about "spending" on immigrants, you talk about "denying" concrete things like emergency health care or education for children.
posted by jb at 12:15 PM on November 8, 2008


Nate Silver does more than just weight demographic data and average out pollsters, from what I understand. He fed the polling data into his Baseball engine, and one of the key factors is that it is forward-looking.

Polling people by telephone by definition cannot tell the future; it only tells you present sentiment. Similarly, prediction markets (e.g. InTrade and the stock markets) do somewhat discount the future, since prices reflect expected future value, but in reality the majority of the value is the present value. This is why Nassim Taleb loves to hate on prediction markets: events as large and social as wars have not been accurately priced and predicted in these markets.

What Nate's baseball engine does (and I assume his election polling as well) is use nearest-neighbor analysis of prior data. Assume all prior election polling data are saved into this computer program and weighted by accuracy. Now assume it is the month of October. From what I understand, he will take the current polling data and match it to October data (or any month at all) from prior election years, and the prediction engine will see what tends to happen afterwards. This is why he says in his writing that the model assumes tightening at the end of the race, because that's what has historically happened in October.

By matching current polling sentiment to prior movements, he can accurately predict results because the prediction is grounded in the ways races have historically moved. Unfortunately, the tightening didn't happen this time, because the model cannot predict things it hasn't seen before. Nonetheless, this is significantly more accurate than polling current sentiment alone. In fact, nearest-neighbor analysis is used for things like Amazon and Netflix recommendations (it recommends things that other people tend to watch after watching whatever you recently rented).
posted by amuseDetachment at 2:00 PM on November 8, 2008
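A toy version of the nearest-neighbor idea amuseDetachment describes: match the current polling margin at a given point in the race to the most similar historical readings and average how those races ended up. The historical pairs below are placeholder numbers purely to make the sketch runnable, not real election figures, and the lookup is far simpler than anything 538 actually does.

```python
def knn_predict(current_margin, history, k=3):
    """Predict the final margin by averaging the outcomes of the k most
    similar historical October polling margins (a toy nearest-neighbor lookup)."""
    neighbors = sorted(history, key=lambda h: abs(h["oct_margin"] - current_margin))[:k]
    return sum(h["final_margin"] for h in neighbors) / k

# Hypothetical (October polling margin, final result margin) pairs --
# placeholders standing in for the historical database described above.
history = [
    {"oct_margin": 8.0,  "final_margin": 9.7},
    {"oct_margin": 5.5,  "final_margin": 2.4},
    {"oct_margin": 3.0,  "final_margin": 0.5},
    {"oct_margin": -2.0, "final_margin": -7.2},
    {"oct_margin": 12.0, "final_margin": 17.4},
]
print(round(knn_predict(6.5, history), 1))  # averages the three closest historical races
```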


Since we're talking about election polling instead of opinion polling (the topic of the original post)... I, too, followed 538.com and was impressed with its accuracy, but the Princeton Election Consortium's model did better, and it did so using "polling data alone, without [538's] complex corrections."

Nate's Baseball engine is awesome, but the results show that it's needlessly complex when predicting well-polled American presidential elections.
posted by sdodd at 5:39 PM on November 8, 2008


Nate's Baseball engine is awesome, but the results show that it's needlessly complex when predicting well-polled American presidential elections.

Based on your sample trial of n=1?
posted by drpynchon at 10:07 PM on November 8, 2008


> Based on your sample trial of n=1?

No, based on the analysis in that "Princeton Election Consortium's model did better" link.

Perhaps I should have said "results suggest?" I was going to add something like, "of course, further research is warranted," but I figured that goes without saying.
posted by sdodd at 10:24 PM on November 8, 2008


I'm not sure whether looking at the hundreds of election models out there and figuring out post hoc which one was luckiest is the right way to go about it.
posted by amuseDetachment at 11:02 PM on November 8, 2008


If they didn't ask me, who else didn't they ask? Mostly everyone, right?

I got polled. Twice. They were both computerized systems, but they were real polling organizations.
posted by krinklyfig at 11:20 PM on November 8, 2008


Well, I was probably a good candidate to get polled anyway. I'm registered non-party in a swing state.
posted by krinklyfig at 11:22 PM on November 8, 2008


Sorry, sophist, I was being all sarcastic and stuff.
posted by Devils Rancher at 7:22 PM on November 10, 2008

