Dispatches: Are Media Polls Criminally Bad?

02/19/2009 01:17 pm ET | Updated May 25, 2011

This post is part of Pollster.com's week-long series on Stan Greenberg's new book, Dispatches from the War Room.

[Cover image: Dispatches from the War Room]

Stan Greenberg's book is a "good read" from both a political and a polling perspective, and I know fellow junkies will find it rewarding. But I want to start a conversation about a strong claim Greenberg makes on page 58 about the quality of polling. I'll quote the full paragraph, interspersing my comments as we go.

The endgame in presidential campaigns brings out all sorts of irrationalities, starting with the media polls. Many are criminally bad.

Ok, that's provocative and got my attention. How so?

Some are done in one night with no time for callbacks and thus over-represent people who are easily reached by phone, often seniors.

In principle this is a problem, but in practice not so much. Of 543 national polls in Pollster.com's data for 2008, only 6 had a one-day field period, and another 76 had two days of interviewing. That's 15%, which isn't trivial but certainly isn't representative of most polling. The most common field period in our data was 3 days, with 230 polls, and another 115 did 4 days of interviewing. That's 64% of polls. At the state level there is more single-day polling, a lot of it done by IVR (or "robo-polls"), which are still controversial in the polling profession. Of 1,791 state polls in our data, 472 were done in a single day, or 26%. Another 204 were done in two days, for a combined share of 38%. So the complaint here is fairly well justified for state polls, but not so much for national polling.
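To make the arithmetic behind those shares explicit, here is a minimal sketch in Python using just the counts quoted above (the grouping into short versus longer field periods is mine):

    # Field-period counts quoted above, from Pollster.com's 2008 data.
    national = {1: 6, 2: 76, 3: 230, 4: 115}   # polls by number of days in the field
    national_total = 543
    state = {1: 472, 2: 204}
    state_total = 1791

    def share(counts, total, days):
        """Percent of all polls whose field period is in `days`."""
        return 100.0 * sum(counts[d] for d in days) / total

    print(round(share(national, national_total, [1, 2])))  # ~15% of national polls
    print(round(share(national, national_total, [3, 4])))  # ~64% of national polls
    print(round(share(state, state_total, [1])))           # ~26% of state polls
    print(round(share(state, state_total, [1, 2])))        # ~38% of state polls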

Greenberg continues:

They are not carefully weighted and, as a result, show wide swings in voter preference that the media interpret wrongly as voter fickleness.

I'd be curious what constitutes careless weighting. Almost all pollsters weight the data to demographic distributions derived from the Current Population Survey (a huge monthly government survey with over a 90% response rate, and therefore considered particularly reliable). Pollsters might differ on some technical issues here, but it is hard to believe that media polls are that different from Greenberg's own methods. None of the weighting techniques is in any way secret: just buy a textbook on survey sampling, read the journals, or attend panels at AAPOR (the pollsters' conference), and the variety of options is all right there in the public domain. So there is little reason to think that variation in weighting practice is due either to secret knowledge that Greenberg and colleagues have that is unavailable to others, or to "media pollsters" systematically choosing to be reckless by using poor weighting schemes. The one controversial area of weighting is whether to weight the data to some specific party identification distribution, derived either from past polling or from exit polls (which are themselves suspect on this, but I digress). I don't know what Greenberg's position is on this, or if that is what he is referring to in this paragraph.
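For readers unfamiliar with the mechanics, the basic idea behind this kind of demographic weighting is simple post-stratification: inflate groups the sample under-represents and deflate groups it over-represents. A toy sketch follows; the age categories and shares are invented for illustration, not actual CPS figures.

    # Toy post-stratification weights: make the sample's age distribution
    # match a population benchmark such as the Current Population Survey.
    population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}  # benchmark (illustrative)
    sample_share     = {"18-34": 0.20, "35-64": 0.45, "65+": 0.35}  # seniors over-represented

    weights = {g: population_share[g] / sample_share[g] for g in population_share}
    # {'18-34': 1.5, '35-64': 1.11..., '65+': 0.57...}

    # Every respondent in group g then gets weight weights[g], and a weighted
    # estimate is sum(w_i * y_i) / sum(w_i) over respondents i.

Real practice layers several variables at once (often by raking across their margins), but the principle is the same, and it is entirely public.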

I would, however, strongly agree with the implication of the end of the sentence: media interpretation of polling variation is far too quick to present random sampling noise as "real change" and to let that noise drive the narrative. I might disagree that this is a function of weighting, but I think this point is certainly right.
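A back-of-the-envelope check shows why so much apparent movement is noise. The numbers below are hypothetical rather than from any particular poll: the 95% margin of error on the change between two independent polls of 800 respondents each is close to five points, so a three-point "swing" between them tells us essentially nothing.

    import math

    def moe_of_change(p1, n1, p2, n2, z=1.96):
        """Approximate 95% margin of error for the change between two
        independent poll estimates of the same proportion."""
        se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        return z * se

    # Hypothetical: a candidate at 48% (n=800) one week and 51% (n=800) the next.
    print(moe_of_change(0.48, 800, 0.51, 800))  # about 0.049, i.e. roughly 5 points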

Continuing in the paragraph:

And they usually ask the respondent only for whom they will vote without any prior questions that build trust. With people reluctant to tell a stranger for whom they will vote without being warmed up, many of the media polls report an inflated number of undecided voters.

I think a review of the major network and newspaper polls will show that the vote question appears early in the survey, but very rarely as the first question and never as the only question. This does, however, raise a valuable point of legitimate disagreement among pollsters: Is it better to remind a voter of a variety of issues and considerations about the candidates through your survey before asking the vote question, or is it better to get the vote response early, before you have "primed" the respondent to be thinking of particular issues? For example, at Wisconsin an ongoing survey in the 1990s asked about the state of the economy as the warmup items, then asked presidential approval. We switched that to ask initial questions about interest in politics and attention to news as the warmups, to avoid priming respondents to think of the economy before they thought about presidential approval. Some, perhaps Greenberg, would argue that asking a series of questions about current politics before the approval question (or the vote question in an election survey) is actually a good thing, because those are the considerations voters are likely to carry with them to the polls. Others would point to evidence that the issues you prime will have greater influence over the approval or vote response than issues that might matter to voters but which you happen not to bring up. I'd be interested in hearing more from Greenberg about his view of this debate.
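As a concrete, hypothetical illustration of the two orderings at issue (the question wordings here are mine, not from any actual instrument):

    # Order A: substantive items first, which may prime economic considerations.
    order_a = [
        "How would you rate the condition of the national economy these days?",
        "Over the next year, do you expect the economy to get better or worse?",
        "Do you approve or disapprove of the way the president is handling his job?",
    ]

    # Order B: neutral warmups first; approval is asked before any issue content.
    order_b = [
        "How interested are you in politics and public affairs?",
        "How closely do you follow news about national politics?",
        "Do you approve or disapprove of the way the president is handling his job?",
    ]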

Finally the paragraph concludes:

Worst of all, a poll that shows a result sharply different from all the others gets media attention because the difference is "news" when it is likely the result of normal sampling fluctuations or careless polling practices.

Amen! Our efforts to detect polling outliers and label them as such are an attempt to reduce the attention paid to these fluctuations. But outliers happen to every polling organization. If they don't, the pollster is fiddling with the data in suspect ways: sampling theory says an honest poll will produce a statistical outlier about 5% of the time, simply by chance. A pollster who never produces one is cheating, not being "better." The problem for journalists and public interpreters of polls is not to get rid of outliers or suppress their publication, but to recognize them as such and give them an appropriate interpretation.
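That 5% figure is just the flip side of a 95% confidence level, and a quick simulation makes the point. This is a sketch under the textbook assumption of a simple random sample with a known true value; real polls are messier, but the logic is the same.

    import random

    def outlier_rate(true_p=0.50, n=1000, polls=10_000, z=1.96):
        """Share of honest simple-random-sample polls whose estimate lands
        outside the conventional 95% margin of error purely by chance."""
        moe = z * (true_p * (1 - true_p) / n) ** 0.5
        outliers = 0
        for _ in range(polls):
            p_hat = sum(random.random() < true_p for _ in range(n)) / n
            if abs(p_hat - true_p) > moe:
                outliers += 1
        return outliers / polls

    print(outlier_rate())  # about 0.05 -- roughly one poll in twenty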

My bottom line is that polling techniques and methodology are "open source." Take classes in grad school and you can learn all the theory. Work in a polling firm and you'll also learn a lot of practical wisdom. Survey professionals all have access to this. Greenberg may well think that his polls are superior to those conducted by others, but I'd disagree that this has anything to do with secret knowledge or methods. I'd also question the implicit claim that campaign pollsters are better at conducting polls than media pollsters are. I am more sympathetic to the claim that campaign pollsters, or advisors on policy as Greenberg's book illustrates, may be able to craft surveys that get at the ability of politicians to shape public response through how they talk about issues. Media (and academic) polls are seldom focused on this, and in that way are arguably more limited. That is a very interesting discussion that perhaps we can move on to.
