11/29/2006 05:45 pm ET

House Districts vs. Poll Results: Part II

On Monday, I looked at how well our averages of polls in U.S. House districts compared to the unofficial vote counts, and when we averaged the averages, they matched quite well. A related and important question is how well those averages did within individual districts. How often did our House poll averages - sometimes drawing on polls conducted over a span of more than a month - provide a misleading impression of the eventual result on Election Day? In most cases the pre-election averages in House races coincided with the eventual results, but there were a handful of districts where those averages gave a misleading impression of the outcome of the race. The tougher question is whether that misimpression was the fault of the polls or of the combination of their timing and subsequent "campaign dynamics" that changed voter preferences.

That last point is important. Pre-election polls attempt to be snapshots of voter preferences "if the election were held today." No one should expect a head-to-head vote preference question asked in the first week of October to forecast the outcome of an election held a month later. And as noted here previously, our final averages often included polls stretching back a month or more before Election Day. So consider today's discussion as much about the merits of averaging polls in House races as about the merits of the polls themselves.

Let's start with the averages that we posted on our House map and summary tables. We averaged the last five polls in each district (including those conducted by partisans and sponsored by the campaigns or political parties). We then classified each race as a "toss-up" or as "lean" or "strong" for a particular candidate, based on our assessment of the statistical strength of that candidate's lead.
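In rough terms, the averaging rule works like the following Python sketch (the tuple format for a poll here is illustrative, not our actual data layout):

    def last_five_average(polls):
        """Average the Dem-minus-Rep margin of a district's five most
        recent polls, or of all its polls if fewer than five exist."""
        # Each poll is assumed to be a (end_date, dem_pct, rep_pct) tuple.
        recent = sorted(polls, key=lambda p: p[0])[-5:]
        margins = [dem - rep for _, dem, rep in recent]
        return sum(margins) / len(margins)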

We were able to find at least one poll in 87 districts, but only 34 districts had five or more polls. As such, the House race averages often spanned far more time than our statewide poll averages. The final averages were based on 304 polls, but 58 of those (in 38 districts) were conducted before October. More than a third of the polls used in the averages (124) were conducted before October 15. So it would not be surprising to see these averages mislead in any district with a late trend.

In comparing the averages to the results, I see ten districts with "reversals" - districts that we had designated as "leaning" or better to one candidate while a different candidate prevailed. Specifically:

[Table: the ten reversal districts]

It is worth noting that all but two of these "reversals" were seats we classified as either "lean" Democrat or Republican (a lead beyond one standard error, but not two). That is to say, the lead of the ultimately unsuccessful candidate was relatively small, though obviously not small enough to rate "toss-up" status. The exceptions were New Hampshire-1 and Florida-13, which we had classified as strong Republican and strong Democrat, respectively (based on average margins of 11.8% and 7.2%).
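For readers who want the mechanics spelled out: treating "lean" as a lead beyond one standard error but not two, and "strong" as a lead beyond two, the classification looks roughly like the sketch below. One caveat: the exact standard-error calculation behind our ratings is not described here, so this version simply uses the standard error of the mean margin across the averaged polls.

    from math import sqrt
    from statistics import mean, stdev

    def classify(margins):
        """Label a race from the poll margins (Dem minus Rep, in points)
        that went into its average."""
        avg = mean(margins)
        # Standard error of the mean margin; a single poll yields no
        # spread estimate, so it defaults the race to "toss-up".
        se = stdev(margins) / sqrt(len(margins)) if len(margins) > 1 else float("inf")
        party = "Dem" if avg > 0 else "Rep"
        if abs(avg) <= se:
            return "toss-up"
        if abs(avg) <= 2 * se:
            return "lean " + party
        return "strong " + party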

Some of these reversals are explicable. For example, all of the public polls released in Ohio-15 and Kansas-2 were conducted prior to October 11, so it is entirely possible that those early surveys were right and that late trends moved the ultimate winner ahead by Election Day. Also, the results for Pennsylvania-4, Arizona-5, and New Hampshire-1 all showed trends toward the ultimate winner. The polls in Florida-13 also showed a late trend to the current nominal leader, Republican Buchanan. In Nebraska-3 and Kansas-2, partisan polls with results highly favorable to their sponsors also helped skew the averages in a potentially misleading direction.

Finally, as many readers know, the results from Florida-13 remain in dispute due to an unusually high rate of "under-votes" in one county, a rate that appears to result from a poorly designed ballot layout on that county's touch-screen electronic voting equipment. A compelling draft analysis by four political scientists (Frisina, Herron, Honaker and Lewis) argues that Democrat Christine Jennings would have prevailed but for the roughly 15,000 votes lost because of the touch-screen equipment.

I had anticipated some of these issues and, in a post just before the election, presented a variety of "scorecards" based on applying different filters (only late polls, only independent polls, etc.). At the time, the alternative averages made very little bottom-line difference in the number of seats we classified as leaning Democrat or Republican. For the sake of brevity, I will not go through every permutation, but the following table summarizes the number of reversals that would have resulted under the various screens described in my post on Monday.

[Table: number of "reversal" districts under each poll-filtering screen]

Not surprisingly, applying the various filters does reduce the number of "reversal" districts - those where one candidate led in the poll averages but another won. As we throw out early polls or those conducted by partisans, however, a different kind of "miss" increases: districts where we miss a party switch because no polls are available. Our rule on Pollster.com was to assume no change in party for districts with no available polls. Had we included only independent polls conducted after October 15, however, we would have made the wrong assumption about four districts previously held by Republicans where Democrats prevailed: Florida-16, Kansas-2, New York-24 and Pennsylvania-7. So, remarkably, the rate of "missed" outcomes is roughly the same regardless of the filter applied.
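To make the trade-off concrete, the scorecard logic amounts to something like the sketch below: filter each district's polls, classify the districts that still have polls, and fall back to the seat's current party where the filter leaves nothing. The field names and filter predicates are illustrative, not our actual data format.

    from datetime import date

    def scorecard(districts, keep_poll):
        """Project a winner for every district under a given poll filter.

        `districts` maps a district name to (incumbent_party, polls),
        where each poll is a (end_date, is_partisan, dem_pct, rep_pct)
        tuple. Districts whose polls are all filtered out default to the
        incumbent party - exactly where unpolled party switches (like
        Florida-16 or New York-24 above) slip through.
        """
        calls = {}
        for name, (incumbent, polls) in districts.items():
            kept = [p for p in sorted(polls) if keep_poll(p)]
            if not kept:
                calls[name] = incumbent  # no polls survive: assume no change
            else:
                last = kept[-5:]
                margin = sum(d - r for _, _, d, r in last) / len(last)
                calls[name] = "Dem" if margin > 0 else "Rep"
        return calls

    # Filters like those discussed above (cutoff date illustrative):
    late_only = lambda p: p[0] >= date(2006, 10, 15)
    independent_only = lambda p: not p[1]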

Of course, there are a few districts mentioned above where the reasons for a late "reversal" are not immediately apparent. I'll try to take up some of these, as well as the question of how some of the more prolific pollsters fared, in a subsequent post.