
What Happened in NH? AAPOR's Answer

03/30/2009 10:55 am ET | Updated May 25, 2011

Most political junkies remember two things about last year's New Hampshire primary: first, Hillary Clinton's surprising three-point win, and second, that the pollsters were the "biggest losers," as the final round of pre-election polls had shown Barack Obama surging ahead. A dozen different surveys showed Obama leading by margins of 3 to 13 points, and by roughly six percentage points on our final trend estimate. Fewer remember that polling errors were even bigger in subsequent states, and fewer still will recall that the American Association for Public Opinion Research (AAPOR) announced the formation of an ad-hoc committee to study and report on the problems of the New Hampshire and other primary polls.

Well today, more than fourteen months after the 2008 New Hampshire primary, the AAPOR Ad Hoc committee has released its full report. While those hoping for an obvious smoking gun will be disappointed, the report represents a massive collection of information that does shed new light on what happened in New Hampshire. The evidence is spotty and frequently hedged -- "definitive tests" were "impossible" -- but AAPOR's investigators identify four factors as contributing to polls having "mistakenly predicted an Obama victory." From the AAPOR committee press release:

  • Given the compressed caucus and primary calendar, polls conducted before the New Hampshire primary may have ended too early to capture late shifts in the electorate's preferences there.
  • Most commercial polling firms conducted interviews on the first or second call, but respondents who required more effort to contact were more likely to support Senator Clinton. Instead of continuing to call their initial samples to reach these hard-to-contact people, pollsters typically added new households to the sample, skewing the results toward the opinions of those who were easy to reach on the phone, and who more typically supported Senator Obama.
  • Non-response patterns, identified by comparing characteristics of the pre-election samples with the exit poll samples, suggest that some groups who supported Senator Clinton--such as union members and those with less education--were under-represented in pre-election polls, possibly because they were more difficult to reach.
  • Variations in likely voter models could explain some of the estimation problems in individual polls. Application of the Gallup likely voter model, for example, produced a larger error than was present in the unadjusted data. The influx of first-time voters may have had adverse effects on likely voter models.
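To make the second and third bullets concrete, here is a minimal, purely illustrative simulation in Python. All of the numbers (group sizes, candidate leans, contact rates) are invented for the example, not taken from the report; the point is only the mechanism: if hard-to-reach voters lean Clinton, a pollster who counts only first-call contacts and replaces non-contacts with fresh numbers will overstate Obama's margin, while one who keeps calling back the original sample will not.

```python
import random

random.seed(1)

# Invented toy population: half "easy to reach" (leans Obama 55/45),
# half "hard to reach" (leans Clinton 55/45). The true race is a tie.
def draw_voter():
    easy = random.random() < 0.5
    p_obama = 0.55 if easy else 0.45
    return easy, ("Obama" if random.random() < p_obama else "Clinton")

def poll(n, first_call_only):
    """Simulate a poll of n completed interviews.

    first_call_only=False: keep calling back until every sampled voter
    is reached, so non-contacts are never replaced.
    first_call_only=True: count only first-call contacts (easy-to-reach
    voters answer 80% of first calls, hard-to-reach voters 20%) and
    replace non-contacts with fresh phone numbers.
    """
    votes = []
    while len(votes) < n:
        easy, vote = draw_voter()
        answered = random.random() < (0.8 if easy else 0.2)
        if answered or not first_call_only:
            votes.append(vote)
    return 100.0 * votes.count("Obama") / n

print("Exhaustive callbacks: Obama %.1f%%" % poll(100000, False))  # ~50
print("First-call contacts:  Obama %.1f%%" % poll(100000, True))   # ~53
```

In this toy setup the first-call-only design overstates Obama's share by about three points, even though both designs complete the same number of interviews.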

In other words, what happened in New Hampshire wasn't one thing; it was likely a lot of small things, all introducing errors in the same direction. Various methodological challenges or shortcomings that might ordinarily produce offsetting variation in polls instead combined to throw them all off the same way. Polling's "perfect storm" never materialized this past fall, but the label seems more apt for the New Hampshire polling debacle.
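A back-of-the-envelope sketch (again with invented magnitudes) shows why the direction of the errors matters more than their size: four error sources worth a point or two each mostly cancel when their signs are independent, but stack up to several points on the margin when they all break the same way.

```python
import random
import statistics

random.seed(2)

# Four hypothetical error sources, each worth 0 to 2 points on the margin
# (think late shifts, contact patterns, non-response, likely voter models).
def net_error(same_direction):
    errors = [random.uniform(0, 2) for _ in range(4)]
    if not same_direction:
        # Independent signs: the biases tend to offset one another.
        errors = [e * random.choice((-1, 1)) for e in errors]
    return sum(errors)

trials = 10000
print("independent signs: mean net error %+.1f pts"
      % statistics.mean(net_error(False) for _ in range(trials)))  # ~ +0.0
print("all one direction: mean net error %+.1f pts"
      % statistics.mean(net_error(True) for _ in range(trials)))   # ~ +4.0
```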

The report also produces evidence that rules out a number of prominent theories, among them the so-called "Bradley Effect." The authors report that they saw "no evidence that white respondents over-represented their support for Obama," and thus no evidence of "latent racism" benefiting Clinton. Fair enough, but they do report evidence of a "social desirability effect" that led respondents to report "significantly greater" support for Obama "when the interviewer is black than when he or she is white" (although Obama still led, by smaller margins, when interviewers were white -- see pp. 55-59 of the PDF report).

As should be obvious, this very quick and cursory review just scratches the surface of the information in the 123-page report. There is a story here about the sheer breadth of the information provided. For example, today's release also includes immediate availability through the Roper Archives of full respondent-level data provided by CBS News, Gallup/USA Today, Opinion Dynamics/Fox News, the Public Policy Institute of California (PPIC), SurveyUSA and the University of New Hampshire/CNN/WMUR for polls conducted in New Hampshire, South Carolina, California and Wisconsin. [Update: I'm told that a small glitch in the documentation is holding up release of some or all of the Roper data until, hopefully, later today.]

But aside from the admirable disclosure by the organizations listed above, there is also a story here about an outrageous lack of disclosure and foot-dragging, including three organizations that "never responded" to AAPOR's requests for information over the last fourteen months: Strategic Vision (for polls conducted in New Hampshire and Wisconsin), Clemson University and Ebony/Jet (for polls conducted in South Carolina).

Stay tuned. I will have more to say later today and in the days that follow on this new report. Meanwhile, please share your thoughts on the report in the comments below.

For further reading, see my first review of the theories for the New Hampshire polling flap, our bibliography of reaction around the web and the rest of our coverage from 2008.

Update: ABC's Gary Langer shares his first impressions, including one thought I neglected to include: "The volunteer AAPOR committee members who produced [the report], led by Prof. Michael Traugott of the University of Michigan, deserve our great thanks."

Interests disclosed: As a member of AAPOR's Executive Committee from May 2006 through May 2008, I voted to create the Ad Hoc committee. I did not serve on the committee, but our Pollster.com colleague Charles Franklin did participate.