10/16/2006 06:20 pm ET Updated Dec 06, 2017

Handicapping the House: Part I

With the addition of House race data, it is a good time to talk about the difficulty of measuring the status of the race to control Congress at the district level. Political polling is always subject to a great deal of variation and error (and not all of it the random kind), but Congressional district polls present their own unique challenges.

First, we are tracking something different in terms of voter attitudes and preferences than in other races, particularly contests for President. Two years ago, voters received information about George Bush and John Kerry from nearly every media source for most of the year. Huge numbers of voters tuned in to watch live coverage of nationally televised candidate debates. In races for the Senate and House, news coverage is far less prevalent and voters pay considerably less attention until the very end of the campaign. Even then, voters still get much of their information about House candidates from paid television and direct mail advertising.

Of course, in the top 25 or 30 House races, the candidates (and political parties) have already been airing television advertising. However, if you expand the list to the next 30-40 races that could be in play, the flow of information to voters drops off considerably. Middle-tier campaigns in districts in expensive media markets (like New York or Chicago) will depend on direct mail rather than television to reach voters.

So generally speaking, voter preferences in down ballot races are more tentative and uncertain. The (Democratic affiliated) Democracy Corps survey of Republican swing districts released last week reported 26% of likely voters saying there is at least a "small chance" they may still change their minds about their choice for Congress. When they asked the same question about the presidential race in mid-October 2004, only 14% said they saw a "small chance" or better of changing their mind about voting for Kerry or Bush.

This greater uncertainty means that minor differences in methodology can have a big impact on the results. Specifically, pollsters may vary widely in the size of the undecided vote they report, depending on how hard they push uncertain voters.

Second, the mechanics of House race polling can be very different from statewide methodology. The biggest challenge involves how to limit the sample to voters within a particular House district. In statewide races the selection is easy. Since area code boundaries do not cross state lines, it is easy to sample within individual states. So most of the statewide polls we have been tracking use a random digit dial (RDD) methodology that can theoretically reach every voter with a working land line telephone.

No such luck with Congressional districts, whose gerrymandered borders frequently divide counties, cities, even small towns and suburbs. Since very few voters know their district numbers, pollsters use a variety of strategies to sample House districts. Most of the partisan pollsters, as well as the Majority Watch tracking project, use samples drawn from lists of registered voters (sometimes referred to as "registration based sampling" or RBS). These lists make it easy to select voters within a given district, but they frequently omit telephone numbers for large numbers of voters (typically 20% to 40%**). Remember the real fear that RDD surveys are missing cell-phone-only households? Right now the missing cell phone households represent roughly 6-8% of all voters. Lists, obviously, miss many more. If the uncovered households differ systematically from those with working numbers on the lists, a bias will result.
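To make that coverage-bias risk concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not any pollster's published method): if the voters missing from a list lean differently than the voters the list covers, the survey's estimate shifts by roughly the missing fraction times that preference gap.

```python
def coverage_bias(frac_missing, covered_pct, missing_pct):
    """Difference between what a list-based survey would observe and the
    true figure, when frac_missing of voters cannot be reached and they
    support a candidate at missing_pct rather than covered_pct."""
    true_pct = (1 - frac_missing) * covered_pct + frac_missing * missing_pct
    observed_pct = covered_pct  # the survey reaches only covered households
    return observed_pct - true_pct

# Hypothetical: a quarter of voters have no listed number, and they favor
# a candidate by 5 points more than listed voters do.
print(coverage_bias(0.25, 50.0, 55.0))  # -> -1.25 (survey understates support)
```

Even a modest preference gap among unlisted voters, multiplied by a large uncovered fraction, can move a result by more than a point, which is the "unpredictable coverage bias" pollsters worry about.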

Again, most partisan pollsters (including my firm) are comfortable sampling from lists, because the benefits of sampling actual voters within each district appear to outweigh the risks of coverage bias (see the research posted by list vendor Voter Contact Services for a sampling of arguments in favor of RBS). Media pollsters are generally more wary. SurveyUSA, for example, conducted a parallel test of RDD and RBS in a 2005 experiment and found a large and consistent bias in the RBS samples favoring one candidate. "SurveyUSA rejects RBS as a substitute for RDD," their report read, "because of the potential for an unpredictable coverage bias." So in House polls, media pollsters often use RDD and screen for voters in the given district based on their ability to select their incumbent member of Congress from a list of all members of Congress in their area.

These various challenges have made many media outlets and public pollsters wary of surveys in House races. As of two weeks ago, we had logged more than 1,000 statewide polls for Senate or Governor into our database for 2006. As of yesterday, we had tracked only 173 polls conducted in the most competitive House races, and as the table below shows, only 47 of those came from independent media pollsters using conventional telephone methods.

Nearly half of all the House race polls come from two automated pollsters: SurveyUSA (23) and especially the Majority Watch project of RT-Strategies and Constituent Dynamics (56). Also, more than a quarter of the total (52) are partisan surveys conducted by the campaigns, the party committees or their allies, with far more coming from Democrats (44) than Republicans (8).

The sample sizes for House race surveys are also typically smaller. While national surveys typically involve 800 to 1000 likely voters, and statewide surveys 500 to 600, many of the House polls involve only 400 to 500 interviews (although the Majority Watch surveys have been interviewing at least 1000 voters in each district).
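Those sample sizes translate directly into wider confidence intervals. A quick sketch using the standard simple-random-sampling formula (real surveys have design effects that widen these further, so treat the numbers as a floor):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# The sample sizes mentioned above.
for n in (400, 600, 1000):
    print(f"n={n}: +/- {margin_of_error(n) * 100:.1f} points")
# n=400: +/- 4.9 points
# n=600: +/- 4.0 points
# n=1000: +/- 3.1 points
```

Cutting a sample from 1,000 to 400 interviews widens the margin of error on each candidate's share by nearly two points, and the margin on the gap between candidates widens even more.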

Finally, very few districts have been surveyed by public pollsters more than a few times since Labor Day. Only two of the 25 seats now held by Republicans and rated as "toss-ups" by the Cook Political Report have been polled 5 or more times. Most of these critical seats have been polled 2 to 4 times. Put this all together, and the results are likely to be more varied and more subject to all sorts of error than other kinds of political polls. After the 2004 election, SurveyUSA put together a collection of results for every pre-election public opinion poll released in the U.S. from October 1 to November 2, 2004. Their spreadsheets included 64 House race surveys, and their calculations indicate that those House polls had more than double the error on the margin (5.82) of the polls conducted in the presidential race (3.43).
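SurveyUSA's exact scoring details may differ, but the basic "error on the margin" idea can be sketched as the gap between the candidate margin a poll showed and the margin in the actual returns (the function name and the example numbers below are my own illustration):

```python
def error_on_the_margin(poll_a, poll_b, actual_a, actual_b):
    """Absolute difference, in percentage points, between the margin a
    poll showed and the margin in the actual election returns."""
    return abs((poll_a - poll_b) - (actual_a - actual_b))

# Hypothetical: a poll showing the race 48-44 in a contest decided 52-46.
print(error_on_the_margin(48, 44, 52, 46))  # -> 2
```

Averaging that quantity over the 64 House surveys versus the presidential surveys is what produces the 5.82-versus-3.43 comparison above.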

All of which goes to say that while we too will be watching the House polls closely over the next three weeks, for all the tables and numbers, we know far less about these races than meets the eye. More on what we do know tomorrow.

**Correction: Colleagues have emailed to point out that quoted match rates for list samples have improved in recent years and now typically range from 60% to 80%. I won't quarrel, although I have had past experiences where the quoted rate exaggerated the actual match once non-working numbers were purged from the sample.