11/08/2012 08:10 pm ET | Updated Nov 09, 2012

Who's The Best Pollster Of 2012? Not So Fast

WASHINGTON -- A day or two after an election, those who follow politics closely like to ask, "Who was the best pollster?"

That's a great question, but a tough one to answer even under ideal circumstances. Trying to answer it right now, with millions of votes still uncounted, is misleading at best.

On Tuesday, however, Costas Panagopoulos, director of Fordham University's Center for Electoral Politics, published just such a ranking of 28 organizations that had produced a national survey in the closing weeks of the campaign. Not surprisingly, news sites like TPM and The Washington Post rushed out reports that reproduced the list and focused on the pollsters at the top.

Here are three reasons why this sort of ranking falls flat.

First, the Panagopoulos rankings (which we haven't reproduced here) are based on just one national survey from each organization (or combination of pollster and sponsor). Together, those surveys produced a relatively narrow range of results (see below). For example, 14 of the 28 polls gave President Barack Obama nominal leads of 1 to 3 percentage points; 23 of the 28 ranged between a tie and a four-point Obama advantage.


Second, the calculation makes no allowance for the margin of sampling error -- that is, the range of expected random variation, which for most of these surveys was at least plus or minus 3 percentage points on the estimate of support for each candidate. As a result, many of the polls on this list could have shown Obama ahead by 1, 2 or 3 points purely as a matter of chance.
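As a rough illustration (not part of the original analysis), the standard 95 percent margin of sampling error for a simple random sample follows from the familiar textbook formula, sketched here; the sample size of 1,000 is a typical figure for a national poll, not one taken from any survey on the list:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of sampling error, in percentage points, for a
    candidate polling at proportion p in a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# A poll of about 1,000 respondents with a candidate near 50 percent
# carries a margin of error of roughly 3 points:
print(round(margin_of_error(0.50, 1000), 1))  # about 3.1
```

In other words, a 2-point Obama lead in a single poll of this size is statistically indistinguishable from a tie.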

Yet a percentage point or two is all that separates the "best" pollsters on the list from those who rank in the middle. As of this writing, Obama leads Mitt Romney in the popular vote count by 2.4 points (50.4 percent to 48.0 percent). Not surprisingly, the polls that showed Obama winning by a 2-point margin -- from Public Policy Polling, Ipsos and YouGov -- ranked at the top of the Panagopoulos list (he released no specific scores for each pollster or estimates of the precision of the scoring).

Third and most important, the votes are still being counted. As of this writing, there are still millions of uncounted provisional and absentee ballots in Ohio, Florida, Arizona, California and many other states.

On the Thursday after the 2008 election, just over 121 million votes had been counted, giving Obama a 6-point lead over Sen. John McCain (52.3 percent to 46.3 percent). But those totals did not include nearly 10 million additional votes that were later included in the official certified count, boosting Obama's margin over McCain to 7.4 points (52.9 percent to 45.5 percent).

If Obama's margin over Romney shows a similar increase this year, it could easily rise to 3 or even 4 percentage points, which would push the pollsters that Panagopoulos currently ranks as "best" toward the middle of the list and lift other organizations to the top.

That's exactly what happened four years ago. Panagopoulos rushed out a ranking on the day after the election and then revised and republished it later when the count was complete. The result? The six "best" pollsters in his initial ranking were displaced by six entirely different "best" pollsters a few months later.

If we want to assess how the national pollsters did on the basis of their final surveys alone, a far better approach is to determine whether those polls did what they were supposed to do, which was to capture the actual result within their expected margin of error. That sort of comparison can be seen in the chart below (created by George Washington University political scientist John Sides).


The chart shows not just how far each final poll's estimate varied from Obama's current percentage of the two-party vote (ballots cast for either him or Romney), but also the range of each poll's margin of error. According to the chart, nearly every poll performed as it should have, which is to say Obama's current percentage falls within its margin of error. (Remember, however, that the vertical line representing Obama's actual vote will likely move slightly to the left once all the ballots are counted.)
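For concreteness, the two computations behind the chart -- converting raw vote percentages into the two-party share, and checking whether that share falls inside a poll's margin of error -- can be sketched as follows. This is an illustrative sketch, not Sides' code; the 1,000-respondent poll showing Obama at 50 percent of the two-party vote is a hypothetical, and the simple-random-sample margin of error is an assumption:

```python
import math

def two_party_share(dem_pct, rep_pct):
    """Obama's share of the two-party vote (ballots cast for him or
    Romney), in percent."""
    return 100 * dem_pct / (dem_pct + rep_pct)

def within_moe(poll_estimate, actual, n, z=1.96):
    """Does the actual result fall inside the poll's 95% interval,
    assuming a simple random sample of n respondents?"""
    p = poll_estimate / 100
    moe = z * math.sqrt(p * (1 - p) / n) * 100
    return abs(poll_estimate - actual) <= moe

actual = two_party_share(50.4, 48.0)   # ~51.2 percent as of this writing
# A hypothetical final poll of 1,000 respondents showing Obama at
# 50 percent of the two-party vote covers the actual result:
print(within_moe(50.0, actual, 1000))  # True
```

By this standard a poll "performs as it should" whenever the vertical line in the chart falls inside its error bars, even if its point estimate missed the final margin by a couple of points.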

Scoring pollsters on the basis of their final poll has other problems as well. Some of the surveys on the Panagopoulos list, such as the Associated Press/GfK survey, were conducted more than a week before the election. As the HuffPost Pollster trend chart shows, those polls likely missed a final-week uptick for Obama captured by most of the daily tracking surveys.

Moreover, polling firms can be influenced by the results other firms report; they may herd toward one another, especially in the final days before an election. That means the very last survey may not be the best indicator of the overall quality of a pollster's work.

Given all the issues, naming a "best pollster" in this way is not a good idea. But it's not at all premature to begin to assess the performance of pollsters generally or to identify the firms whose results were especially problematic.

For example, while the averages compiled by HuffPost Pollster and the other polling aggregators were correct in forecasting Obama as the winner of the key swing states, HuffPost's averages understated the president's victory margin by 2 to 3 percentage points in Wisconsin, Nevada, Iowa, New Hampshire and Colorado (as of this writing, based on the current AP vote count).


That pattern may be connected to the consistent house effects demonstrated by certain pollsters throughout the fall campaign. The chart below was compiled by Simon Jackman on Oct. 24, two weeks before the election, but the relative ranking of the pollsters showing the most pronounced house effects did not change significantly over the last two weeks.


Pollsters like Rasmussen Reports, Gravis Research and Gallup produced results that were consistently more favorable to Romney throughout the fall. While additional analysis is necessary, the big house effects from these especially prolific pollsters help to explain the understatement of Obama's vote by polling aggregators.

Crowning a single "best pollster" now on the basis of just one poll from each organization is a futile and misleading effort. For the moment, assessments of pollster error should focus instead on those that missed the mark by wide margins.

CORRECTION: An earlier version of this story credited the margin-of-error chart to YouGov. In fact, it was created by George Washington University political scientist John Sides and later tweeted about by YouGov.

