AAPOR's Report: Why 2008 Was Not 1948

As someone who writes about polling methodology, I consider last week's report from the American Association for Public Opinion Research (AAPOR) on the mishaps in last year's New Hampshire and other primary election polling manna from heaven. Republican pollster David Hill was right to call it "the best systematic analysis of what works and what doesn't for pollsters" in decades. The new findings and data on so many aspects of polling arcana, from "call backs" to automated-IVR polls, are invaluable, especially given that the AAPOR researchers lacked access to all of the public polling data from New Hampshire and the three other states they focused on.

But that lack of information is itself part of the story. Valuable as it is, the report was hindered by a troubling lack of disclosure and cooperation from many of the organizations that played a part in what even prominent pollsters described as an unprecedented "fiasco" and "one of the most significant miscues in modern polling history."

Last week, the Wall Street Journal's Carl Bialik summed up the problem:

Just seven of 21 polling firms contacted over a year ago by the American Association for Public Opinion Research for the New Hampshire postmortem provided information that went beyond minimal disclosure -- such as data about the interviewers and about each respondent.

Last year, two days after the New Hampshire primary, I wrote a column reminding my colleagues of the investigation that followed the 1948 polling debacle that created the infamous "Dewey Defeats Truman" headline (emphasis added):

[A] week after the [1948] election, with the cooperation of virtually every prominent public pollster, the independent Social Science Research Council (SSRC) convened a panel of academics to assess the pollsters' methods. After "an intensive review carried through within the span of five weeks," their Committee on the Analysis of Pre-election Polls and Forecasts issued a report that would ultimately reshape public opinion polling as we know it.

[...]

[SSRC Committee] members moved quickly, as their report explains, out of a sense that "extended controversy regarding the pre-election polls ... might have extensive repercussions upon all types of opinion and attitude studies."

The American Association for Public Opinion Research "commended" the SSRC effort and urged its member organizations to cooperate. "The major polling organizations," most of which were commercial market researchers competing against each other for business, "promptly agreed to cooperate fully, opened their files and made their staffs available for interrogation and discussion."

But that was 1948. Things were different last year.

On January 15, 2008, AAPOR announced it would form an ad hoc committee to evaluate the primary pre-election polling in New Hampshire. Two weeks later, it announced the names of the eleven committee members. They convened soon thereafter and decided to broaden the investigation to include the primary pre-election polls conducted in South Carolina, California and Wisconsin (p. 16 of the report explains why). On March 4, 2008, AAPOR President Nancy Mathiowetz sent a six-page request to the 21 organizations that had released public polls in the four states, including 11 that had polled in New Hampshire.

The request (reproduced on pp. 83-88 of the report) had two categories: "(1) information that is part of the AAPOR Standards for Minimal Disclosure and (2) information or data that goes beyond the minimal disclosure requirement." The first category included items typically disclosed (such as survey dates, sample sizes and the margin of error), some not always available (including the exact wording of questions asked and weighting procedures) and some details that most pollsters rarely release (such as response rates). The second category, information beyond minimal disclosure, amounted to the 2008 equivalent of the "opening of files" from 1948. Specifically, the committee asked for "individual level data for all individuals contacted and interviewed, records about the disposition of all numbers dialed, and information about the characteristics of interviewers."

The Committee had originally hoped to complete its report in time for AAPOR's annual meeting in May 2008, but by then, as committee chair Michael Traugott reported at the time, only five firms had responded to the request. (The first to respond, Mathiowetz tells me, was SurveyUSA, which on April 8, 2008, provided complete electronic data files for the two states it polled.) In fairness, many of the pollsters had their hands full with surveys in the ongoing primary battle between Barack Obama and Hillary Clinton. Nevertheless, when I interviewed Traugott in May, he still hoped to complete the report in time for the conventions in August, but as cooperation lagged, the schedule slipped once again.

By late November 2008, with the elections completed, some firms had still not responded with answers to even the "minimal disclosure" questions asked back in March. At that point, Mathiowetz tells me, she filed a formal complaint with AAPOR's standards committee, alleging violations of AAPOR's code of ethics. Since the standards evaluation committee has not yet completed its work, and since that committee is bound to keep the specifics of such complaints confidential, Mathiowetz could not provide further details. However, she did say that some pollsters supplied information subsequent to her complaint that the Ad Hoc Committee included in last week's report.

So now that the report is out, let's use the information it provided to sort the pollsters into three categories:

The best: Seven organizations -- CBS News/New York Times, the Field Poll, Gallup/USA Today, Opinion Dynamics/Fox News, the Public Policy Institute of California (PPIC), SurveyUSA and the University of New Hampshire/CNN/WMUR -- provided complete "micro-data" on every interview conducted. These organizations lived up to the spirit of the 1948 report, opening up their (electronic) files and, as far as I can tell, answering every question the AAPOR committee asked. They deserve our praise and thanks.

The worst: Three organizations -- Clemson University, Ron Lester & Associates/Ebony/Jet and StrategicVision -- never responded.

The rest in the middle: Eleven organizations -- American Research Group (ARG), Datamar, LA Times/CNN/Politico, Marist College, Mason-Dixon/McClatchy/MSNBC, Public Policy Polling (PPP), Rasmussen Reports, Research 2000/Concord Monitor, RKM/Franklin Pierce/WBZ, Suffolk University/WHDH and Zogby/Reuters/C-Span -- fell somewhere in the middle, providing answers to the "minimal disclosure" questions but no more.

The best deserve our praise, while those that evaded all disclosure deserve our scorn. But what can we say about the pollsters in the middle?

First, remember that their responses met only the "minimal disclosure" requirements of AAPOR's code of ethics. They provided the "essential information" that pollsters should include, according to AAPOR's ethical code, "in any report of research results" or at least "make available when that report is released." In other words, the middle group provided information that pollsters should always put into the public domain along with their results, not months later or only upon request following an unprecedented polling failure.

Second, consider the way that minimal cooperation hindered the committee's efforts to explain what happened in New Hampshire, especially on the question of whether a late shift to Senator Clinton explained some of the polling error there. That theory is popular among pollsters (yours truly is no exception), partly because of the evidence -- most polls finished interviewing on the Sunday before the primary and thus missed reactions to Clinton's widely viewed "emotional" statement the next day -- and partly because the theory is easier for pollsters to accept, as it lets other aspects of methodology off the hook. The problem wasn't the methodology, the theory goes, just a "snapshot" taken too soon.

While the committee found evidence that several other factors influenced the polling errors in New Hampshire, it concluded that late decisions "may have contributed significantly." It based this conclusion mostly on evidence from two panel-back surveys -- conducted by CBS News and Gallup -- that measured vote preferences for the same respondents at two distinct times. The Gallup follow-up survey was especially helpful, since it recontacted respondents from Gallup's final poll for a second interview conducted after the primary.
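To see why respondent-level panel data matters so much here, consider a minimal sketch of the kind of tabulation it makes possible. This is illustrative Python with hypothetical records and field names, not the actual CBS News or Gallup files; the point is simply that once the same respondent can be matched across waves, measuring a late shift is straightforward.

    # A hypothetical two-wave panel: each record is one respondent's
    # stated preference in the final pre-primary poll ("pre") and in
    # the post-primary recontact interview ("post").
    panel = [
        {"pre": "Obama", "post": "Clinton"},
        {"pre": "Clinton", "post": "Clinton"},
        {"pre": "Obama", "post": "Obama"},
        {"pre": "Undecided", "post": "Clinton"},
    ]

    def late_shift(panel, candidate):
        # Share of respondents who did not back `candidate` before the
        # primary but reported voting for that candidate afterward.
        switchers = sum(1 for r in panel
                        if r["pre"] != candidate and r["post"] == candidate)
        return switchers / len(panel)

    print(f"Moved to Clinton late: {late_shift(panel, 'Clinton'):.0%}")

Aggregate toplines cannot support this kind of respondent-by-respondent comparison, which is exactly why the committee needed the raw files.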

Although the evidence suggested that a late shift contributed to the problem, the committee hedged on this point because, as they put it, "we lack the data for proper evaluation." Did more data exist that could shed light on this issue? Absolutely.

First, four pollsters continued to interview on Monday. ARG, Rasmussen Reports, Suffolk University and Zogby collectively interviewed approximately 1,500 New Hampshire voters that day, but the publicly released numbers combined those interviews with others conducted on Saturday and Sunday. The late shifts these pollsters reported in their final releases were inconsistent, but none of the four ever released tabulations that broke out results by day of the week, and all four refused to provide respondent-level data to the AAPOR committee.

That omission is more than just a missed opportunity. It also leaves open the possibility that at least one pollster -- Zogby -- was less than honest about what his data said about the trend in the closing hours of the New Hampshire campaign. See my post from January 2008 for the complete details, but the last few days of Zogby's tracking numbers simply do not correspond with his characterization of that data the day after the primary. Full cooperation with the AAPOR committee would have resolved the mystery. Zogby's failure to cooperate should leave us asking more troubling questions.

But it was not just the "outlaw pollsters," to quote David Hill, that failed to share important data with the AAPOR committee. Consider the Marist Poll, produced by the polling institute at New York's Marist College. Marist is not a typical pollster. Its directors, Lee Miringoff and Barbara Carvalho, are long-time AAPOR members. More important, Miringoff is a former president of the National Council on Public Polls (NCPP), and both Miringoff and Carvalho currently serve on its board of trustees. NCPP is a group of media pollsters that has its own, slightly less stringent disclosure guidelines, which nonetheless encourage members to "release raw datasets (ASCII, SPSS, CSV format) for any publicly released survey results."

The day after the New Hampshire primary, Marist reported its theories about what went wrong and promised "to re-contact in the next few days the voters we spoke with over the weekend to glean whatever additional insights we can." Seven weeks later, Miringoff participated in a forum on "What Happened in New Hampshire" sponsored by AAPOR's New York chapter and shared some preliminary findings from the re-contact study. "Our data," he said, "suggest there was some kind of late shift to Hillary Clinton among women."

Given the importance of that finding, the academic affiliation of the Marist Poll, Miringoff's role as a leader in NCPP and that organization's stated commitment to disclosure, you might think that Marist would be first in line to share its raw data with the respected scholars on the AAPOR committee.

You might think that, but you would be wrong.

As of this writing, the Marist Institute has yet to share raw respondent-level data for either its final New Hampshire poll or the follow-up study. In fact, the Marist Institute has not yet provided any of the results of the recontact study to Professor Traugott or the AAPOR committee -- not a memo, not a filled-in questionnaire, not a PowerPoint presentation...nothing.

I was surprised by their failure to share raw data, so I emailed Miringoff for comment. His answer:

First, we did provide information on disclosure as required by AAPOR and I spoke, along with Frank Newport, on the NH primary results at a meeting of NYAAPOR. It was a great turnout and provided an opportunity to discuss the data and issues.

Unfortunately, the "information on disclosure" they provided was, again by AAPOR standards, the minimum that any researcher ought to include in any publicly released report. To be fair, Marist had already included much of that "minimal disclosure" information in its original release. According to Nancy Mathiowetz, however, Marist did not respond to her requests to fill in the information missing from the public report -- such as the order of questions, a description of their weighting procedure and response rate data -- until November 17, 2008. And that transmission said nothing at all about the follow-up study.

Miringoff continued:

Second, we did conduct a post-primary follow-up survey to our original pre-primary poll. We think both these datasets should be analyzed in tandem. We are preparing them to be included at the Roper Center along with all of our pre-primary and pre-election polling from 2008 for anyone to review.

What's the hurry?

I am not sure what is more depressing: that a group of "outlaw pollsters" can flout the standards of the profession with little or no fear of recrimination, or that a former president of the NCPP can so blithely dismiss repeated requests from AAPOR's president with little more than a "what, me worry?" shrug. Does it really require 14 months (and counting) to prepare these data for sharing?

Just after the primary, I let myself hope that the pollsters of 2008 might follow the example of the giants of 1948, put aside the competitive pressures and open their files to scholars. Fortunately, the survey researchers at CBS News, the Field Poll, Gallup, Opinion Dynamics, PPIC, SurveyUSA and the University of New Hampshire (and their respective media partners) did just that. For that we should be grateful. But the fact that only seven of 21 organizations chose to go beyond minimal disclosure in this case is profoundly disappointing.

The AAPOR report is a gift, in more ways than one, for what it tells us about the state of modern pre-election polling. The question now is whether polling consumers can find a way to do something about the sad state of disclosure it reveals.

Correcting the correction: I had it right the first time. The CBS News/New York Times partnership conducted their first New Hampshire survey in November 2007, but CBS News was solely responsible for the panel-back study. (An earlier correction claimed that the original version of this post had wrongly identified the CBS News New Hampshire polling as a CBS/New York Times survey and that the New York Times was not involved in the New Hampshire surveys; that correction was mistaken.)
