I had intended to post a "quick" summary of what Tuesday night's results say about how the polls did, but like a thread pulled on a sweater, my outline kept getting longer. So apologies for the delay in getting this summary posted. What follows is a review of how the polls performed this year, with a closer look at the question posed yesterday by our own Brian Schaffner: Was it "a victory for IVR polling?"
New Jersey. Our final trend estimate based on all pre-election polls was dead even, with each major party candidate receiving 42.0% of the vote and independent Chris Daggett 10.1%. Christie had a one-point lead on the RealClearPolitics average of the last five non-partisan polls (+1.0%), roughly the same margin as using our more "sensitive" trend line (+1.1%).
The unofficial count, as of this writing, has Christie leading by 4.3% (though as noted yesterday, all of these unofficial results are likely to change slightly as provisional and absentee ballots are counted). So the average polling error in New Jersey was between 3.3% and 4.3%, depending on the average used. Nate Silver compiled comparable New Jersey polling errors (measured against final averages) in nine previous elections; those errors ranged from a low of 0.5 points to a high of 4.8. So the error yesterday, while higher than average, fell well within recent experience.
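The "between 3.3% and 4.3%" range is simply the gap between each final poll average and Christie's unofficial 4.3-point lead, and the same arithmetic recurs for Virginia, New York City and Maine below. As a minimal sketch (the helper function name is mine, not anything from the original analysis):

```python
# Sketch of the "error on the margin" arithmetic used throughout this post.
# The figures are the New Jersey numbers quoted above; the helper name
# `margin_error` is an illustrative choice, not from the source.

def margin_error(poll_margin: float, actual_margin: float) -> float:
    """Absolute gap between a poll's projected margin and the actual
    result, both expressed in percentage points."""
    return abs(actual_margin - poll_margin)

ACTUAL_MARGIN = 4.3  # Christie's unofficial lead, in points

# Our final trend estimate was dead even (margin 0.0); the
# RealClearPolitics five-poll average had Christie up by 1.0.
trend_error = margin_error(0.0, ACTUAL_MARGIN)  # 4.3
rcp_error = margin_error(1.0, ACTUAL_MARGIN)    # 3.3

print(f"error range: {rcp_error:.1f} to {trend_error:.1f} points")
```

The same function applied to Virginia (a 13.7-point trend-estimate margin against a 17.4-point unofficial result) reproduces the 3.7-point error cited below.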
At the same time, nearly everyone has noticed that the average of the final polls from three organizations using an automated methodology (sometimes referred to as "interactive voice response" or IVR) had Christie ahead by four percentage points (46% to 42%) -- roughly the same as his unofficial margin -- while the last three live-interviewer telephone polls had Corzine leading by an average of one point (41% to 40%).
As I wrote on Monday night, what makes that gap between automated and live-interviewer polling interesting is that it was not some random fluke on the last few polls, but persisted throughout the campaign to a degree that we did not see in Virginia this year or in most states during the 2008 presidential election. My conclusion was that the consistency in the estimate of Corzine's vote on so many recent polls suggested a looming "incumbent effect": voters had largely made up their minds about Corzine, but a small yet critically important number were still weighing whether to support Christie or Daggett. So, the theory goes, the IVR polls did better by removing the live interviewer and simulating a secret ballot, thus pushing voters harder to make a choice and more accurately recording their true intentions over the phone.
And what happened to Daggett? Our final trend estimate had him at 10%, but he received only 5.8% of the vote. Although it had been rising until mid-October, Daggett's support ultimately followed the traditional pattern: many voters who had been intrigued by his candidacy ultimately concluded that their votes would be wasted and opted to support either Christie or Corzine. The Fairleigh Dickinson University poll provided a hint of where Daggett's support was heading in an experiment conducted on their last survey: They found that Daggett received just 6% -- the same number he won on election day -- when they named only Corzine and Christie as candidates but accepted Daggett as a volunteered choice. When they offered a three-way choice that included Daggett, his support jumped to 14%.
Virginia. Republican Bob McDonnell's victory in Virginia was never in doubt during the final weeks of the campaign, so political junkies were less obsessed with the polling numbers, but the polling errors in Virginia were, on average, about the same as in New Jersey. Our final trend estimate had McDonnell ahead by 13.7% (54.7% to 41.0%). The unofficial tally has McDonnell leading by 17.4% (58.7% to 41.3%), so the error, as of this writing, averages 3.7 points on the margin.
In Virginia, the gap between the results of automated and live-interviewer polls was not nearly as big or as consistent as in New Jersey. The average of the final automated polls in Virginia conducted by PPP, SurveyUSA and Rasmussen had McDonnell at 56%, compared to 54% on the final polls in the last week conducted by five organizations using live interviewers, while both sets of polls gave Democrat Creigh Deeds an average of 41% of the vote. However, the final automated polls by SurveyUSA and PPP, along with the live-interviewer survey by Virginia Commonwealth University, came closest to the final margin (as of this writing).
New York City. Our final trend estimate had Mayor Michael Bloomberg leading Democratic challenger William Thompson by a 14-point margin (53.1% to 39.0%), but Bloomberg won by less than five (50.6% to 46.0%), so the polling error is large (9 points on the margin) -- roughly the same as the infamous New Hampshire polling debacle.
What happened? Marist pollster Lee Miringoff describes it as a "textbook case of pre-election poll analysis:"
It is not unusual in contests between a well-known incumbent (Bloomberg) and a relatively unknown challenger (Thompson) that the incumbent ends up getting pretty much the same number he was attracting in pre-election polls. Undecided voters tend to find the challenger or not vote at all, having already rejected the incumbent.
He refers, of course, to the "incumbent rule," a subject I speculated about at length in 2004, only to see it generally fail to apply that year or in close races in 2006 and 2008. That said, it does appear to have returned in New Jersey and New York City on Tuesday.
But that apparent reemergence raises an important question: If the rule is no longer a "rule," but rather a phenomenon that occurs only occasionally, how do we know to expect it? Miringoff wrote yesterday that Marist's polls "showed the trend that Democratic voters were 'coming home' to Thompson." That result would have been a helpful warning sign. Problem is, I can't find any reference to it in Marist's final poll release. Instead, I find this prediction: "If today were Election Day," they wrote on Wednesday without qualification, "Mayor Michael Bloomberg would handily win a third term."
If anyone deserves to say "I told you so" in New York, it is Thompson pollster Geoff Garin, who released a survey last week showing Thompson gaining (he said), trailing by only 8 points (38% to 46%) and by only 3 points (41% to 44%) among those who said they were certain to vote. The release prompted Bloomberg spokesman Howard Wolfson to retort that it "gives new meaning to the term margin of error." Not exactly. (And yes, we managed to miss this poll and omit it from our chart -- apologies to Garin and our readers for that oversight).
I asked Garin for his thoughts and he agrees that "undecideds split against incumbent" in the New York race and that such a split was knowable in advance, but argues:
[I]t is stupid to think they would split 100 to nothing. There was a high undecided in NYC because voters were cross pressured -- they did not want to reward Bloomberg for his bad behavior on term limits, but they didn't know enough about Thompson to know whether he would be up to the job.
Garin also thinks their sample made a difference:
I think the main reason we did better and the public polls were off is that we worked off the voter file, and were persnickity about who we took into what was very likely to be a low turnout election. Even among whites, the smaller the turnout scenario the better for Thompson. I am sure the public polls let in too many people.
Maine Question 1. Polling on the gay-marriage referendum was far more limited -- just seven public polls released over the course of the campaign -- and the complicated ballot language and the error-prone nature of prior referendum polling warned us to expect the unexpected. Yet while the differences between the final polls were relatively small, it is worth noting that the automated survey from PPP was the only one that showed more support for the anti-gay-marriage position than opposition. Our final trend estimate showed the No side (pro gay marriage) with a two-point lead (49.4% to 47.1%), but Question 1 won by nearly six (52.8% to 47.2%).
While this one experience is far from a conclusive test, there are at least theoretical reasons to think that automated surveys have an advantage in measuring true preferences on issues like gay marriage, where the presence of a live interviewer might introduce some "social discomfort" that would make the respondent reluctant to reveal their true preference.
* * *
So were automated IVR polls the big winners on Tuesday, as Mickey Kaus, Taegan Goddard and PPP's Tom Jensen argue? If what you care about most is predicting the winners, it is clear that the automated surveys provided a more accurate gauge of the outcome, especially in New Jersey, where the closer simulation of the secret ballot probably gave us a heads-up of an imminent "incumbent rule" effect favoring Christie. SurveyUSA also deserves credit for coming closer than most pollsters to the final margin in New Jersey, Virginia and New York City.
But that said, consider that we count on polls to do much more than predict the outcome. In addition to the points raised by Brian Schaffner here yesterday, consider two things:
First, as a live-interviewer media pollster pointed out to me yesterday, there were some inconsistencies with subgroups, particularly by race. As the table below shows, despite relatively small sample sizes, the three automated surveys showed Republicans Christie and McDonnell winning a greater percentage of the African-American vote than the final live-interviewer surveys and the exit polls (though there were a few exceptions, namely Rasmussen in New Jersey and Marist in New York City).
If you believe the exit poll result, then the automated surveys provided a generally misleading sense of whether the Republican candidates were about to make bigger inroads among African-American voters than they ultimately did (consider also commenter RussTC3's observation about big differences between job approval ratings as measured by PPP and the exit polls -- as Mike Mokrzycki reminds us, we do polls for reasons other than predicting the outcome).
Second, there is one last contest we need to review....
New York 23. Three last-minute polls on the special election in New York's 23rd Congressional District, all conducted after Republican Dede Scozzafava withdrew from the race last Saturday, showed Conservative Doug Hoffman leading Democrat Bill Owens by margins of between 5 and 17 points. Yet Owens prevailed by 4 points (49.0% to 45.9%). Whatever shortcomings we might identify in the polling, the far bigger error was the interpretation applied by pundits, most notably me, who foolishly assumed that the trend in Hoffman's direction was unstoppable and that normal assumptions about last-minute developments would apply. In retrospect, it is obvious that there was nothing normal about the last 72 hours of this particular campaign.
Moreover, we should have paid closer attention to the evidence of growing voter uncertainty in the final Siena Research Institute poll. Their final survey, conducted on Sunday night, showed Hoffman with a modest but not quite statistically significant lead (41% to 36%) but also a doubling of the undecided (from 9% to 18%) in just a few days. So their poll showed that voter uncertainty was surging at a time when it is usually nonexistent. To his great credit, Siena pollster Steven Greenberg also argued that Owens might still gain from the Scozzafava endorsement on Sunday since "most voters are not political junkies" and had not yet heard the news (an argument I boldly dismissed since few undecided voters had a favorable impression of Scozzafava -- apologies to Greenberg for that).
But while we might plausibly reconcile the results of the Siena poll with the outcome, the PPP survey is another story. While their estimate of Owens' support (34%) was within a few points of the other polls, PPP had Hoffman receiving five percentage points more support (51%) than he ultimately received (45.9%). A late shift among the undecided voters cannot explain the difference.
I am planning to look more closely at this example, but the important point for now is that while the automated polls turned in a strong performance in New Jersey, Virginia and Maine, the PPP poll in NY-23 was highly misleading.
The larger lesson is this: Automated polls have been maligned, unfairly in my view, as inherently "unreliable." Yet when it comes to predicting election outcomes they continue to prove, NY-23 aside, at least as reliable as surveys done by conventional means. In New Jersey this week, they were more accurate in predicting the winner. At the same time, however, it would be wrong to jump to the opposite conclusion and place inherently greater trust in all automated surveys, especially when used for purposes other than predicting election outcomes.
All polls have their limitations. Rather than trying to divide them into two categories, "reliable" and "crap," we might do better to try to understand their limitations and interpret the results we see accordingly.