Generic House vs. National Vote: Part II

So how did national estimates of the "generic" House vote compare to the national vote for Congress? We learned in my last post on this topic that the national House vote is being counted and is not yet set in stone. My estimate of the Democratic victory margin (roughly 7 points, 52% to 45%) is still subject to change. The survey side of the comparison is even murkier, with an unusually wide spread of results among likely voters on the final round of national surveys.

To try to make sense of all the numbers, we need to revisit the "generic" House vote and its shortcomings. By necessity as much as design, national surveys have made no attempt to match respondents to their individual districts and ask vote preference questions that involve the names of the actual candidates. Instead, they have asked some version of the following:

If the elections for Congress were being held today, which party's candidate would you vote for in your Congressional district -- the Democratic Party's candidate or the Republican Party's candidate?

The problem is that the question assumes respondents know who the candidates in their district are and can identify which is the Democrat and which is the Republican. Such knowledge is rare, even in competitive districts, so most campaign pollsters consider the generic ballot a better measure of how respondents feel about the political parties than a tool for measuring actual candidate preference.

In 1995, two political scientists -- Robert Erikson and Lee Sigelman -- published an article in Public Opinion Quarterly (POQ) that compared every generic House vote result measured by the Gallup Organization from 1950 to 1994 to the Democratic share of the two-party vote (D / (D + R)). Among registered voters, when they recalculated the results to ignore undecided respondents, they found that the generic ballot typically overstated the Democratic share of the two-party vote by 6.0 percentage points, and by 4.9 points for polls conducted during the last month of the campaign. When they allotted undecided voters evenly between Democrats and Republicans, they found a 4.8-point overstatement of the Democratic margin, and a 3.4-point overstatement in polls taken during October. (See also Charles Franklin's analysis of the generic vote, as well as the pre-election Guest Pollster contributions by Erikson and Wlezien and by Alan Abramowitz, which used the generic ballot and other variables to model the House outcome.)
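To make the arithmetic concrete, here is a minimal sketch of the two undecided-vote treatments described above. The numbers are illustrative placeholders, not Erikson and Sigelman's data, and the function names are my own.

```python
# Two treatments of undecided respondents when computing the
# Democratic share of the two-party vote, D / (D + R).
# All numbers below are illustrative, not the POQ authors' data.

def two_party_share(dem, rep):
    """Democratic share of the two-party vote."""
    return dem / (dem + rep)

def share_ignoring_undecided(dem, rep):
    # Undecideds are dropped; D and R are simply renormalized.
    return two_party_share(dem, rep)

def share_splitting_undecided(dem, rep, undecided):
    # Undecideds are allotted evenly between the two parties.
    return two_party_share(dem + undecided / 2, rep + undecided / 2)

# Hypothetical generic-ballot poll: 48% D, 42% R, 10% undecided,
# against a hypothetical actual Democratic two-party share of 52%.
poll_d, poll_r, poll_und = 48.0, 42.0, 10.0
actual = 0.52

print(f"Ignoring undecideds:  {share_ignoring_undecided(poll_d, poll_r) - actual:+.1%}")
print(f"Splitting undecideds: {share_splitting_undecided(poll_d, poll_r, poll_und) - actual:+.1%}")
```

Note that splitting the undecideds evenly preserves the Democratic lead but dilutes the Democratic share, which is why the two treatments produce different overstatements.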

Two years later, Gallup's David Moore and Lydia Saad published a response in Public Opinion Quarterly. They made the same comparison of the total House vote to the generic ballot, "but included only the final Gallup poll results before the election -- poll numbers that are closest to the election and also based on likely voters" (p. 605). Doing so reduced the Democratic overstatement from 3.4 points in October to an average of just 1.28 percentage points. In 2002, the Pew Research Center used its own final, off-year pre-election polls from 1994 and 1998 to extend that analysis. Their conclusion:

The average prediction error in off-year elections since 1954 has been 1.1%. The lines plotting the actual vote against the final poll-based forecast vote by Gallup and the Pew Research Center track almost perfectly over time.

Last year, Chris Bowers of MyDD put together a compilation of the final generic House ballot polls from 2002 and 2004, "conducted entirely during the final week" of each respective campaign. When I apply the calculations used by the various POQ authors to those final polls (evenly distributing the undecided vote), the average Democratic overstatement was smaller still -- roughly half a percentage point in both 2002 and 2004.
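Applying the even-split calculation to a set of final polls and averaging the errors looks roughly like this; the poll numbers and the "actual" share below are placeholders, not Bowers's compilation.

```python
# Average Democratic overstatement across a set of final generic-ballot
# polls, with undecideds split evenly. Placeholder numbers throughout.
polls = [  # (dem %, rep %, undecided %)
    (49.0, 43.0, 8.0),
    (47.0, 44.0, 9.0),
    (50.0, 42.0, 8.0),
]
actual = 0.52  # hypothetical actual Democratic two-party share

def split_share(d, r, u):
    # Democratic two-party share after allotting undecideds evenly.
    return (d + u / 2) / (d + r + u)

errors = [split_share(d, r, u) - actual for d, r, u in polls]
print(f"Average overstatement: {sum(errors) / len(errors):+.1%}")
```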

Which brings us to the relatively puzzling result from this year. The following table shows the results for both registered and likely voters for the seven pollsters that released surveys conducted entirely during the final week of the campaign. The most striking aspect of the findings is the huge divergence of results among likely voters. The Democratic margin among likely voters ranges from a low of 4 percentage points (Pew Research Center) to a high of 20 (CNN).

Not surprisingly, the results show a much smaller spread when we look at the larger and more comparable sub-samples of self-identified registered voters. And some of this remaining spread comes from the typical "house effect" in the percentage classified as other or unsure. As we have seen on other measures, the undecided percentage is larger for the Pew Research Center and Newsweek surveys (and for Fox News among likely voters) and smaller for the ABC News/Washington Post survey.

If we factor out the undecided vote by allotting it evenly and compare the results to my current estimate of the actual two-party vote (with the big caveat that counting continues and this estimated "actual" vote is still subject to change), an interesting pattern emerges:

The results of three surveys -- Gallup/USA Today, Pew Research Center, and ABC News/Washington Post -- fall well within the margin of error of the current count. The average result for these three surveys understates the Democratic share of the current count by about half a percentage point. The likely voter models used by these surveys also show the usual pattern -- a narrower Democratic margin among likely voters than among all registered voters.

But three surveys -- CNN, Time and Newsweek -- show big overstatements of the Democratic vote, roughly 5 percentage points on average. And none of these three show the usual narrower Democratic margin among likely voters than among all registered voters. On the CNN survey, the likely voter model actually increases the Democratic margin.
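For a rough sense of what "within the margin of error" means in this comparison, the sketch below computes a conventional 95 percent margin of error for a single proportion under simple random sampling. The share and sample size are placeholders, not the actual survey values, and real surveys carry design effects that widen the interval.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from n interviews,
    assuming simple random sampling (design effects would widen this)."""
    return z * math.sqrt(p * (1 - p) / n)

# Placeholder: a 53% Democratic two-party share from 900 likely voters.
p, n = 0.53, 900
print(f"{p:.0%} +/- {margin_of_error(p, n):.1%}")  # about +/- 3.3 points
```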

It is not immediately apparent why the likely voter models of those three surveys yielded such different results, although, as always, the precise mechanics used on the final surveys were not publicly disclosed. Other than the general information some of these pollsters have provided previously, all we know for certain is the unweighted number of interviews each organization classified as "likely voters," and that information is not much help in sorting out the differences. As indicated in the table below, each pollster identified roughly two-thirds of their initial sample of adults as likely voters.

What can we make of this unusual inconsistency? The overstatement of the Democratic margin among registered voters is generally consistent with past results, but the wide spread of results among likely voters is far more puzzling. The different behavior of the likely voter models looks like a big clue, but without knowing more about the mechanics of the models each pollster employed, firm conclusions are difficult.
