How Tight is the Screen? (2010 Edition)

This post was published on the now-closed HuffPost Contributor platform.

Nate Silver has an interesting catch this morning: On the most recent USA Today/Gallup poll, conducted roughly a week after health care reform passed the House (and 2-4 days after the presidential signing ceremony), both Democrats and Republicans expressed record levels of enthusiasm about voting in the mid-term elections. As Silver points out, the "big problem" for Democrats is that Republican enthusiasm -- 69% say they are "more enthusiastic than usual" about voting -- is still greater than for Democrats (57%).

As he points out, some of this jump is "very probably...a temporary bounce and will fade as memories of the health care legislation become more distant," but he concludes with a point worth discussing further:

What I wish the pollsters would do, actually, is to publish the percentage of people in each party who are screened out by their likely voter model. You don't have to tell us how you're doing it -- but at least let us know in broad strokes how much impact it's having. How much of Rasmussen Reports' apparent house effect, for instance, is because they're applying a likely voter screen when most other pollsters aren't, and how much of it is because there are some differences -- or bugs -- in other parts of their data collection and massaging routine? We shouldn't have to guess; this should be an easy thing for the pollsters to disclose.

That's half right. What pollsters can do easily, and do not do nearly often enough, is publish the percentage of adults (or of registered voters) whom they screen out with their likely voter questions or models. This has been a hobby horse of mine since I started asking pollsters about their likely voter models in 2004. I wrote a two-part series on the subject in the context of primary polling in the summer of 2007, pushed harder for the percentage of adults that qualified during the 2007 run-up to the Iowa caucuses, and returned to it many times during the 2008 primaries.

That said, we need to take care with the percentage-passing-the-screen statistic. Any pre-election survey probably includes at least some response bias toward genuinely likely voters -- truly unlikely voters are presumably more likely to hang up at some point regardless of how they answer a screen question -- so a sample of "adults" may begin with a slight skew toward actual voters (though the actual evidence for this phenomenon is surprisingly thin). The point is complicated, but if such response bias exists, we probably want the percentage of adults that pass the screen to be bigger than the actual turnout percentage among adults.
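To see why, here's a back-of-the-envelope sketch. All of the numbers are invented for illustration -- they come from no real survey -- but they show how differential response rates skew the completed "adult" sample toward voters before any screen is applied:

```python
# Illustrative assumptions only -- not figures from any actual poll.
turnout = 0.40           # true turnout among all adults
resp_voter = 0.50        # completion rate among actual voters
resp_nonvoter = 0.25     # non-voters hang up twice as often

# Composition of the completed-interview sample of "adults"
voters_in_sample = turnout * resp_voter
nonvoters_in_sample = (1 - turnout) * resp_nonvoter
voter_share = voters_in_sample / (voters_in_sample + nonvoters_in_sample)
print(round(voter_share, 3))  # 0.571: the "adult" sample already skews toward voters
```

With these made-up numbers, actual voters are about 57% of completed interviews even though they are only 40% of adults. A screen calibrated so that its pass rate matched the true 40% turnout would therefore cut out genuine voters, which is why the pass rate should run bigger than actual turnout.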

Silver is wrong, however, to say that it's an "easy thing" for all pollsters to publish the percentage screened out in each party. Yes, it would be relatively easy for pollsters using the sometimes controversial Gallup likely voter model, which typically begins with a sample of all adults, retains the answers to the party identification question for all adults, and then applies a filter and weighting to select and model a likely electorate (though Gallup's practice of weighting down a middle category of voters on-the-bubble between likely and not likely would complicate things a bit).
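To make the cutoff-plus-weighting idea concrete, here is a minimal sketch of a Gallup-style model. The 0-7 likelihood score, the cutoff of 6, and the 0.5 down-weight for the on-the-bubble group are all illustrative assumptions, not Gallup's actual parameters:

```python
# Hypothetical respondents: a 0-7 likelihood score, with party ID retained
# for every adult (as in the Gallup design described above).
adults = [
    {"party": "D", "score": 7}, {"party": "R", "score": 6},
    {"party": "I", "score": 5}, {"party": "D", "score": 5},
    {"party": "R", "score": 3}, {"party": "D", "score": 2},
]

def likely_voter_weight(score, cutoff=6, bubble=5, bubble_weight=0.5):
    """Full weight at or above the cutoff, partial weight for the
    on-the-bubble group, zero below -- parameters are assumptions."""
    if score >= cutoff:
        return 1.0
    if score >= bubble:
        return bubble_weight
    return 0.0

for respondent in adults:
    respondent["weight"] = likely_voter_weight(respondent["score"])

# Because party ID was asked of all adults, the share of each party that
# gets screened out (or down-weighted) is directly observable.
modeled_electorate = sum(r["weight"] for r in adults)
print(modeled_electorate)  # 3.0 effective likely voters out of 6 adults
```

The on-the-bubble weighting is exactly what complicates the disclosure Silver wants: a respondent with weight 0.5 is neither "screened in" nor "screened out," so a single pass-rate percentage understates what the model is doing.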

But it would be impossible for pollsters who screen for registered and/or likely voters at the beginning of the survey and terminate the interview with those who do not qualify to report the percentage of each party that pass the screen. They usually hang up before asking a party ID question. The same is true for pollsters who begin with samples drawn from official lists of registered voters. They might be able to tell you something about the party registration of those who get screened out (in party registration states), but only to the extent that they were able to identify the individual they talked to in each household before they ended the call. And they can tell you nothing at all about the party preferences of non-registrants, and whatever statistics they produce would be comparable only with other similarly designed polls in the same state. Those two practices, the use of screening and list samples, apply to virtually all internal campaign polls and most media polls conducted at the state level.

What would make far more practical sense would be for all pollsters to publish the party composition of their likely voter sample. In other words, what percentage of likely voters identify as Democrats, Republicans or independents? Among the most prolific statewide pollsters, SurveyUSA, PPP and Research2000/DailyKos now routinely publish those results. Rasmussen Reports and Quinnipiac do not.
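That disclosure is trivial to produce from any completed likely voter sample, whatever the screening method. A sketch with invented counts:

```python
# Party ID of respondents who passed the screen -- counts invented for illustration.
from collections import Counter

likely_voters = ["D"] * 212 + ["R"] * 238 + ["I"] * 150

counts = Counter(likely_voters)
n = len(likely_voters)
composition = {party: round(100 * c / n, 1) for party, c in counts.items()}
print(composition)  # {'D': 35.3, 'R': 39.7, 'I': 25.0}
```

Unlike the screened-out-by-party number, this requires no knowledge of respondents who never finished the interview, which is why every pollster could report it.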
