10/04/2006 07:26 am ET | Updated May 25, 2011

Tennessee Polls: Not Created Equal

Instapundit Glenn Reynolds asked a good question yesterday:

THE LATEST POLL shows a Ford-Corker dead heat.

Hmm. Just yesterday we had one with Ford up by 5; not long before that there was one with Corker up by 5. Is it just me, or is this more variation than we usually see? Are voter sentiments that volatile (or superficial)? Or is there something about this race that makes minor differences in polling methodology more important? Or is this normal?

At the moment at least, I agree with the answer he received later from Michael Barone that the poll numbers in Tennessee do not appear unusually volatile. Barone pointed out that the results of nearly all the Tennessee polls this year appear to fall within sampling error of the grand average. That point is worth expanding on, but it is also worth noting that the averages conceal some important differences among the various Tennessee surveys.

First, let's talk about random sampling error. If we assume that all of the polls in Tennessee used the same mode of interview (they did not), that they were based on random samples of potential voters (the Internet polls were not), that they had very high response and coverage rates (none did), that they defined likely voters in exactly the same way (hardly), that they all asked the vote question in an identical way (close, but not quite) and that the preferences of voters have not changed over the course of the campaign (no again), then the results of the various polls should vary randomly, following a bell curve.
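Under those idealized assumptions, the spread is easy to simulate. A minimal Python sketch — the 44% "true" Ford share and the 600-interview sample size are illustrative assumptions, not figures from any actual poll:

```python
import random

random.seed(2006)
n = 600          # interviews per poll, in the 500-650 range typical here
p_true = 0.44    # hypothetical "true" candidate share
polls = 5000     # number of idealized polls to simulate

# Each simulated poll draws n independent voters; the poll's estimate is
# the share of simulated respondents who back the candidate.
estimates = [sum(random.random() < p_true for _ in range(n)) / n
             for _ in range(polls)]

# Under pure random sampling, about 95% of polls should land within
# roughly 4 points of the true value.
within_4 = sum(abs(e - p_true) <= 0.04 for e in estimates) / polls
print(f"share of simulated polls within +/-4 points: {within_4:.0%}")
```

The simulation is just repeated coin-flipping, but it makes the point concrete: even perfect random samples of this size scatter a few points around the truth.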


Do the appropriate math, and if we assume that all had a sample size of roughly 500-650 voters (most did), then we would expect these hypothetically random samples to produce results that fall within +/- 4% of the "true" result 95% of the time. Five percent (or one in twenty) should fall outside that range by chance alone. That is the standard "margin of error" that most polls report (which captures only the variation due to random sampling). But remembering the bell curve, most of the polls should cluster near the center of that distribution. For example, roughly 68% of those samples should fall within +/- 2% of the "true" value.
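The arithmetic behind that +/- 4% figure is the standard margin-of-error formula for a proportion, z * sqrt(p(1-p)/n), evaluated at p = 0.5 (the worst case) with z = 1.96 for 95% confidence. A quick sketch:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes at the low end, middle, and high end of the range discussed.
for n in (500, 600, 650):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
```

For samples between 500 and 650 the result works out to roughly +/- 3.8% to +/- 4.4%, which is why +/- 4% is a fair shorthand for these polls.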

Now, let's look at all of the polls reported in Tennessee in the last month, including the non-random sample Zogby Internet polls:


As it happens, the average of these seven polls works out to a dead-even 44% tie, which helps simplify the math. In this example, only 1 of the 14 results (7%) falls outside the range of 40% to 48% (that is, 44% +/- 4%). And only 3 of 14 (21%) fall outside the range of 42% to 46% (or 44%, +/- 2%). So as Michael Barone noted, the variation is mostly what we would expect from random sampling error alone. Considering all the departures from random sampling implied above, that level of consistency is quite surprising.
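The tally itself is just a count of results outside each band around the 44% average. A sketch of that count — the 14 numbers below are hypothetical placeholders, not the actual Tennessee poll results:

```python
def count_outside(results, center, band):
    """Count results falling more than `band` points from `center`."""
    return sum(abs(r - center) > band for r in results)

# Hypothetical stand-ins for 7 polls x 2 candidates = 14 results.
results = [44, 45, 43, 46, 42, 44, 49, 41, 44, 45, 43, 44, 46, 42]

print(count_outside(results, 44, 4))  # outside center +/- 4
print(count_outside(results, 44, 2))  # outside center +/- 2
```

With a well-behaved set of polls, most results stay inside the wider band and only a handful stray outside the narrower one, just as the real Tennessee numbers did.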

These results may seem more varied than in previous years partly because the sample sizes are considerably smaller than the national samples of (typically) 800 to 1000 likely voters that we obsessed over during the 2004 presidential race.

The confluence of the averages over the last month (or even over the course of the entire campaign, as Barone noted) glosses over both important differences among the pollsters and some real trends that the Tennessee polls have revealed. Charles Franklin helped me prepare the following chart, which shows how the various polls tracked the Ford margin (that is, Ford's percentage minus Corker's percentage). The chart draws a line to connect the dots for each pollster that has conducted more than one survey. The light blue dots are for pollsters that have done just one Tennessee survey to date.


The chart shows a fairly consistent pattern in the trends reported by the various telephone polls, both those done using traditional methods (particularly Mason-Dixon) and the automated pollster (Rasmussen). Franklin plotted a "local trend" line (in grey) that estimates the combined trend picked up by the telephone polls (both traditional and automated). The line "fits" the points well: It indicates that Ford fell slightly behind over the summer, but surged from August to September (as he began airing television advertising).
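A "local trend" line of this sort is typically a local-regression (lowess) smoother. As a rough stand-in for the same idea, a centered moving average over the poll-by-poll margins also damps poll-to-poll noise while preserving the overall movement. The margins below are hypothetical, not Franklin's data:

```python
def moving_average(points, window=3):
    """Centered moving average; a crude local-trend smoother."""
    out = []
    for i in range(len(points)):
        lo = max(0, i - window // 2)
        hi = min(len(points), i + window // 2 + 1)
        out.append(sum(points[lo:hi]) / (hi - lo))
    return out

# Hypothetical Ford-minus-Corker margins in rough date order:
# behind over the summer, then surging in September.
margins = [-2, -3, -4, -3, 0, 2, 4]
print(moving_average(margins))
```

The smoothed series rises at the end just as the raw margins do, which is the property that lets a trend line like Franklin's "fit" scattered poll results.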

As Barone noticed, the five automated surveys conducted since July (including one by SurveyUSA) have been slightly and consistently more favorable to Ford than the three conventional surveys (two by Mason-Dixon and one by Middle Tennessee State University). But the differences are not large.

The one partisan pollster - the Democratic firm Benenson Strategy Group - released two surveys that showed the same trend but were a few points more favorable to Democrat Ford than the public polls. This partisan house effect among pollsters of both parties for surveys released into the public domain is not uncommon.

But now consider the green line, the one representing the non-random sample surveys of Zogby Interactive. It tells a completely different story: The first three surveys were far more favorable to Democrat Ford during the summer than the other polls, and Zogby has shown Ford falling behind over the last two months while the other pollsters have shown Ford's margins rising sharply.

This picture has two big lessons. The first is that for all their "random error" and other deviations from random sampling, telephone polls continue to provide a decent and reasonably consistent measure of trends over the course of the campaign. The second is that in Tennessee, as in other states we have examined so far, the Zogby Internet surveys are just not like the others.

UPDATE: Mickey Kaus picks up on Barone's observation that the automated polls have been a bit more favorable to the Democrats in Tennessee and speculates about a potentially hidden Democratic vote:

Maybe a new and different kind of PC error is at work--call it Red State Solidarity Error. Voters in Tennessee don't want to admit in front of their conservative, patriotic fellow citizens that they've lost confidence in Bush and the GOPs in the middle of a war on terror and that they're going to vote for the black Democrat. They're embarrassed to tell it to a human pollster. But talking to a robot--or voting by secret ballot--is a different story. A machine isn't going to call them "weak."

Reynolds updates his original post with a link to Kaus and asks whether the same pattern exists elsewhere.

Another good question, although for now our answer is incomplete. We did a similar "pollster compare" graphic on the Virginia Senate race over the weekend. The pattern of automated surveys showing a slightly more favorable result for the Democrats was similar from July to early September, but the pattern has disappeared over the last few weeks as the surveys have converged. In Virginia, the most recent Mason-Dixon survey has been the most favorable to Democrat Jim Webb.

While we will definitely take a closer look at this question in other states in the coming days and weeks, it is worth remembering that most of the "conventional surveys" in Tennessee and Virginia were done by one firm (Mason-Dixon), while most of the automated surveys to date in Tennessee have been done by Rasmussen. As such, the differences may result from differences in methodology other than the mode of interview among these firms (such as how they sample and select likely voters, or whether they weight by party as Rasmussen does).

