The Art and Science of Choosing Likely Voters

On Wednesday Nate Silver posted a helpful table that compared registered voter and likely voter samples on seven recent national surveys, including both the "traditional" and "expanded" likely voter models reported every day by Gallup.

He noticed that the polls "appear to segregate themselves into two clusters," one showing a 4-6 point difference between the likely and registered voter models and one showing essentially no difference:

The first cluster coincides with Gallup's so-called "traditional" likely voter model, which considers both a voter's stated intention and his past voting behavior. The second cluster coincides with their "expanded" likely voter model, which considers solely the voter's stated intentions. Note the philosophical difference between the two: in the "traditional" model, a voter can tell you that he's registered, tell you that he's certain to vote, tell you that he's very engaged by the election, tell you that he knows where his polling place is, etc., and still be excluded from the model if he hasn't voted in the past. The pollster, in other words, is making a determination as to how the voter will behave. In the "expanded" model, the pollster lets the voter speak for himself.

Nate offered several good reasons why the traditional likely voter models may be missing the mark this year, as well as some reasonable suggestions of ways pollsters might check their assumptions. His bottom line, however, is that he considers the 4-6 point gap between registered and likely voters "ridiculous" and issued a "challenge" to the pollsters showing closer margins to "explain why you think what you're doing is good science."

Now I'm a fan of Nate's work at FiveThirtyEight.com and I share his skepticism about placing too much faith this year in more restrictive likely voter models that place great emphasis on past voting. But having said that, I think it's a bit unfair to imply that the models used by pollsters like Franklin & Marshall and GfK amount to bad "science."

The science and art of likely voter models is worth considering. I've long argued that political polling is a mix of both science and art (just check the masthead of my old blog), and nowhere is the "art" of this business more evident than in the way pollsters select likely voters. Whether it's the likely voter model or screen, decisions about what sort of sample to use, or how to weight the results, pollsters typically make a series of subjective judgments that are at best informed by science. One reason that no two pollsters use exactly the same "model" is that the science of predicting whether a given individual will vote is so imprecise.

As I wrote in my column earlier this week, likely voter models had their origins in a series of "validation" studies first done by pollsters in the 1950s, when they mostly interviewed respondents in person. Since the interviewer visited each respondent at home, they could easily obtain their name and address. After the election, pollsters with sufficient resources could send their interviewers to the offices of local election clerks to look up whether each respondent had actually voted. Gallup used proprietary validation studies to help develop its traditional likely voter model, and the validation data collected by the University of Michigan's American National Election Studies (ANES) from the 1950s through the 1980s helped guide a generation of political pollsters.

Unfortunately, the ANES stopped doing validation studies after 1980, but the data is readily available online, so I downloaded the 1980 survey and ran the cross-tabulations that follow. In 1980, ANES followed its standard practice, conducting an in-person interview with a nationally representative random sample of voters in October, then following up with a second interview with the same respondents after the election in November.
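For readers who want to try this at home, a cross-tabulation like the ones below takes only a few lines of Python. This is a minimal sketch, assuming the 1980 ANES file has been exported to CSV; the file name, variable names and category labels are placeholders of my own, not the actual ANES codes.

```python
import pandas as pd

# Hypothetical CSV export of the 1980 ANES data; the column names below are
# illustrative placeholders, not the real ANES variable names.
anes = pd.read_csv("anes_1980.csv")

# Drop the respondents whose registration status the researchers
# could not confirm (roughly 18% of the sample).
confirmed = anes[anes["validated_status"].isin([
    "registered, voted",
    "registered, did not vote",
    "not registered",
])]

# Unweighted column percentages: stated intention to vote (pre-election)
# broken out by validated registration and turnout status.
table = pd.crosstab(
    confirmed["intends_to_vote"],
    confirmed["validated_status"],
    normalize="columns",   # percentages within each validated-status column
).mul(100).round(1)

print(table)
```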

The following table shows results from questions asked before the 1980 election about whether the respondent was registered and whether they intended to vote, plus a question asked afterwards about whether they had actually voted. (A few caveats: first, the data shown here are unweighted, as I could find no documentation or weight variables in the materials online. Second, roughly 18% of the respondents are omitted from this table because the researchers could not confirm their registration status. Third, obviously, the study is 28 years old, although a more recent validation study conducted in Minnesota by Rob Daves, now a principal of Daves & Associates Research, yielded very similar findings).

The middle column represents respondents who were actually registered to vote, but had no record of voting in the 1980 general election. And no, that's not a typo. Eighty-four percent (84%) of these confirmed non-voters said they planned to vote. Their answers were more accurate after the election, but still, nearly half (44%) of the non-voters claimed inaccurately a few weeks later that they had voted.

The far right column shows the respondents who were confirmed as non-registrants. Nearly a third (30%) told the interviewer that they were registered to vote during their first, pre-election interview, and 45% said they intended to vote. After the election one in five of those with no record of being registered to vote (21%) claimed they had cast a ballot.

These results are not unusual. They are broadly consistent with previous ANES studies. Collectively, they illustrate the fundamental challenge of identifying "likely voters." If you "let the voter speak for himself," he (or she) often overstates the true likelihood of voting. Looking back, many also claim to have voted when they have not -- something to keep in mind when looking at the crosstabulations out this week for those reporting they have voted early.

Now check the patterns on two additional questions about past voting and interest in the campaign. Again, you see strong but imperfect correlations. Those who say they usually vote and who express high interest in the campaign tend to vote more often than those who do not.

Since voters tend to overstate their intentions, pollsters like Gallup (and most of the others in Nate Silver's table) typically combine questions about intent to vote, past voting, interest in politics and (sometimes) knowledge of voting procedures into an index. A respondent who says they are registered, plans to vote, has voted in all previous elections and is very interested in politics might get a perfect score. A respondent that reports doing none of those things gets a zero. The higher the score, the more likely they are to vote. [I should add: I'm giving you the over-simplified, "made-for-TV-movie" version of how this typically works -- as per one of the comments below, Gallup and many others give "bonus points" to younger voters to try to compensate for their inability to say they've voted in previous elections].
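To make that arithmetic concrete, here is a toy version of such an index in Python. The questions, the one-point-per-item scoring and the "bonus point" rule for younger respondents are illustrative assumptions on my part, not Gallup's actual items or cutoffs.

```python
def likely_voter_score(respondent: dict) -> int:
    """Toy likely-voter index: one point per affirmative answer,
    plus a compensating point for young respondents who plan to vote."""
    score = 0
    score += respondent.get("is_registered", False)
    score += respondent.get("plans_to_vote", False)
    score += respondent.get("voted_in_past_elections", False)
    score += respondent.get("very_interested_in_politics", False)
    score += respondent.get("knows_polling_place", False)

    # Younger respondents cannot report past voting, so give them a point
    # back if they say they plan to vote (a stand-in for the "bonus points"
    # adjustment described above).
    if respondent.get("age", 99) < 22 and respondent.get("plans_to_vote", False):
        score += 1

    return score

# Example: a registered 20-year-old, first-time voter who plans to vote.
print(likely_voter_score({"is_registered": True, "plans_to_vote": True, "age": 20}))  # 3
```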

Some pollsters (such as Gallup and others who use variants of their "traditional" model) will use that index to select the portion of their adult sample that corresponds to the level of turnout they expect (they use the index to screen out the unlikely voters). A few pollsters (CBS News/New York Times and Rob Daves when he conducted the Minnesota Star Tribune poll) prefer to weight all respondents based on their probability of voting. The table below (from my post four years ago on the CBS model) shows a typical scale used for this purpose, based on the same 1980 validation data presented above.
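To show how the two approaches can diverge, here is a rough sketch in Python of a cutoff model next to a probability-weighting model. The index scores, the assumed turnout rate and the voting probabilities are invented for illustration; they are not the values any of these pollsters actually uses.

```python
import pandas as pd

# Six imaginary respondents with likely-voter index scores and vote choices.
sample = pd.DataFrame({
    "lv_score":  [7, 6, 5, 3, 2, 0],
    "candidate": ["A", "B", "A", "B", "B", "A"],
})

# Cutoff approach: keep only the highest-scoring respondents, enough to match
# an assumed turnout rate, and discard everyone else.
assumed_turnout = 0.5
n_keep = int(round(len(sample) * assumed_turnout))
likely = sample.sort_values("lv_score", ascending=False).head(n_keep)
print(likely["candidate"].value_counts(normalize=True))

# Weighting approach: keep every respondent, but weight each one by an
# estimated probability of voting tied to the index score.
prob_of_voting = {7: 0.9, 6: 0.8, 5: 0.7, 3: 0.5, 2: 0.4, 0: 0.2}
sample["weight"] = sample["lv_score"].map(prob_of_voting)
print(sample.groupby("candidate")["weight"].sum() / sample["weight"].sum())
```

The difference is easy to see: in the cutoff version the lowest-scoring respondents vanish from the estimate entirely, while in the weighted version they still count, just for less.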

So given all this evidence, why am I skeptical of more restrictive models? Look again at any of the tables above. Neither the individual questions nor the more refined index can perfectly predict which voters will turn out. For example, in the table above, more than a quarter (27.6%) of the voters with the lowest probability of voting -- those who would be disqualified as "likely voters" by most "cut-off" models -- did in fact vote in 1980. And almost as many of the voters scored with the highest probability of voting did not vote (that's one reason why I prefer the CBS model, which weights all registered voters by their probability of voting rather than tossing out the least likely).

Still, the best any of these models can do, as SurveyUSA's Jay Leve put it in an email to me last week in describing his own procedures, is "capture gross changes" in turnout from year to year. "We believe," he continued, "no model in 2008 is capable of capturing fine changes" in turnout. I agree. I also fear, as I did four years ago, that models that try to closely "calibrate" to a particular level of turnout overlook the strong possibility that the respondents willing to participate in a 5 to 15 minute interview on politics are probably more likely to vote than those who hang up or refuse to participate. In other words, some non-voters have already screened themselves out before the calibration process begins.

The best use of these highly restrictive "likely voter models," in my view, is to determine when the level of turnout has the potential to affect the outcome of an election. Put another way, the likely voter models typically produce results that differ only slightly from the larger pool of registered voters. However, in relatively rare elections -- and 2008 appears to be such an example -- the marginal voters tilt heavily to one candidate. Surveys have been showing for months that Barack Obama stands to benefit if his campaign can help increase turnout among the kinds of registered voters that typically do not vote.

The fact that the likely voter models are producing inconsistent results provides additional confirmation of that finding. As Nate Silver points out, some likely voter models (presumably the ones putting more emphasis on past voting) are showing closer results than other models that appear to be less restrictive. The problem is that determining which model is the most appropriate is not a matter of separating science from non-science, and the differences between them are sometimes subtle. Many of the presumably less restrictive models used by national pollsters (ABC/Washington Post and CBS/New York Times, for example) likely include at least some measures of past voting. The true margin that currently separates Obama and McCain probably falls somewhere in between these various "likely voter" snapshots.

Once the votes are counted, we will have a better idea which models are coming closest to reality. Either way, no single model can claim unique "scientific" precision. All involve judgment calls by the pollsters.

[Typo corrected]
