01/15/2009 12:14 pm ET | Updated May 25, 2011

The Exit Poll and Prop 8

Ta-Nehisi Coates posts some helpful reporting this week on the exit poll result showing surprisingly large numbers of African Americans supporting California's Proposition 8, the ballot measure approved by voters this past November that made same-sex marriage illegal in that state. Coates did some additional digging into a recent report (PDF) by two academics, Patrick Egan and Kenneth Sherrill, released last week by the National Gay and Lesbian Task Force Policy Institute. This controversy is worth reviewing, even for those less interested in Prop 8 itself, because of an important lesson it teaches about the need for all of us to be a bit more skeptical in the way we interpret exit polls or, for that matter, any other form of survey.

For those unfamiliar with this particular controversy, it involves the exit poll finding that 70% of African-American voters supported Prop 8, a margin far bigger than among Latinos (53%), whites (49%) or Asians (49%). Since Prop 8 passed by just 4.6 percentage points (slightly less than six hundred thousand votes out of more than 13 million cast), and since the exit poll showed the African-American percentage of California's electorate soaring from 6% in 2004 to 10% in 2008, some have argued that African Americans were responsible for Prop 8's passage. The Associated Press reported within hours of the election that the margins measured among California's black and Latino voters, combined with their turning out "in droves" for Obama, provided "key support" for the same-sex marriage ban. On November 6, the Los Angeles Times cited the same exit poll numbers in reporting that African Americans "played a crucial role in the outcome."
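The arithmetic behind that margin is easy to check. A quick back-of-the-envelope calculation, using the rounded figures cited above rather than official tallies, confirms that a 4.6-point margin on roughly 13 million votes works out to just under six hundred thousand votes:

```python
# Back-of-the-envelope check using the approximate figures cited above.
total_votes = 13_000_000       # "more than 13 million cast" (rounded)
margin_points = 4.6            # winning margin in percentage points
margin_votes = total_votes * margin_points / 100
print(f"{margin_votes:,.0f} votes")  # just under six hundred thousand
```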

The Egan/Sherrill report, drawing on multiple pre- and post-election surveys and an analysis of precinct-level vote returns, produces considerable evidence to debunk the notion that African Americans were singularly responsible for the passage of Prop 8. They find that African-American support was "in the range of 57 to 59 percent," and that differences in age, "religiosity" (frequency of church attendance), party identification and political ideology were more significant in driving vote decisions than race or gender. Coates also adds that most of the polling professionals he interviewed with knowledge of California's political demographics were skeptical that African Americans made up 10% of the electorate (a DailyKos diarist had come to similar conclusions shortly after the election).

Coates makes two very good points about this particular exit poll result. First, he reminds us that the African-American subgroup for the California exit poll was relatively small -- roughly 224 interviews (assuming that the weighting procedure left the percentage of interviews among black voters unchanged). As such, the random sampling error on the 70% statistic alone would have been roughly plus or minus 6 to 7 percentage points (assuming a 95% confidence level). Another issue, not raised by any of Coates' sources, is that the "clustered" sample used by exit polls is inherently more prone to error when the subgroup of interest is also geographically clustered (as African Americans, Latinos, Jews and other subgroups tend to be).
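To make the sampling-error point concrete, here is a quick sketch of the standard margin-of-error formula applied to the numbers above (n of roughly 224, an estimate of 70%, a 95% confidence level). The design effect used in the clustering adjustment is an illustrative assumption, not a figure from the exit poll's documentation:

```python
import math

# Simple random-sampling margin of error for the 70% subgroup estimate.
n, p, z = 224, 0.70, 1.96          # interviews, estimate, 95% z-score
moe = z * math.sqrt(p * (1 - p) / n)
print(f"simple random sample: +/-{moe:.1%}")   # about +/-6 points

# Clustered samples carry a "design effect" (deff) that inflates the
# variance; deff = 1.3 here is a hypothetical value for illustration.
deff = 1.3
moe_clustered = z * math.sqrt(deff * p * (1 - p) / n)
print(f"with clustering:      +/-{moe_clustered:.1%}")  # closer to +/-7
```

Even a modest design effect pushes the plain ±6-point figure toward the ±7-point end of the range the text describes.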

Second, Coates identifies the strength of the Egan/Sherrill report, something a professor of mine called "triangulation" (long before Dick Morris gave that term a very different meaning):

Patrick Egan and Ken Sherrill didn't simply do another poll. Well, they did commission DBR to do another poll, but they didn't stop there. They compared DBR's poll to three other polls taken close to and after the election, and the exit poll. Then after that, they used Goodman's regression to analyze census data and precinct returns. Then after that, they used Gary King's EZI software in much the same way. In other words, instead of employing a single method (an exit poll) to analyze the Prop 8 vote, they used several.
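For readers unfamiliar with Goodman's regression, the idea is to regress each precinct's "yes" share on its demographic composition: the intercept estimates support among voters outside the group, and the intercept plus the slope estimates support within it. The sketch below uses made-up precinct data purely to show the mechanics; it is not the report's actual data or its exact procedure:

```python
# Goodman's ecological regression on hypothetical precinct data:
#   yes_share = a + b * black_share
#   a     ~ estimated "yes" rate among non-black voters
#   a + b ~ estimated "yes" rate among black voters
# These (black_share, yes_share) pairs are synthetic, for illustration only.
precincts = [(0.05, 0.505), (0.10, 0.51), (0.30, 0.53),
             (0.50, 0.55), (0.70, 0.57)]

n = len(precincts)
mean_x = sum(x for x, _ in precincts) / n
mean_y = sum(y for _, y in precincts) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in precincts)
     / sum((x - mean_x) ** 2 for x, _ in precincts))
a = mean_y - b * mean_x
print(f"non-black support ~ {a:.0%}, black support ~ {a + b:.0%}")
```

Goodman's method can produce implausible estimates when its assumptions fail, which is one reason a careful analysis cross-checks it against other approaches, as the report did with King's ecological inference software.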

Yes, we have good reasons to question the results from exit polls beyond ordinary sampling error. However, the biggest challenge we have in interpreting the final exit poll results (not those leaked just before the polls closed but the official tabulations weighted back to match the actual results) is not about any flaw, real or imagined, in the methodology, but rather that we have only one network exit poll to consider.

Think about it. We know (or should know) that all surveys can be inherently flukey. The 95% confidence level we use to calculate the margin of error means that one poll in twenty (or one subgroup result in twenty, or one question in twenty on a single survey) will produce a result outside the error margin by chance alone. And we should also know that the margin of error tells us nothing about the potential for other kinds of variation or error (due to low response rates, poor sample coverage or question wording). When we see the occasional "outlier" crop up in the form of a national telephone survey result, we tend to recognize it as such immediately because we have so many other polls available to compare it to. But when it comes to exit polls, we usually have only one, so we are more likely to accept the results in an unquestioning way.
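The "one poll in twenty" point can be demonstrated with a short simulation: draw repeated polls from a population whose true support is known, and count how often the sample estimate lands outside the 95% margin of error. The sample size and trial count here are arbitrary choices for the sketch:

```python
import math
import random

random.seed(0)  # reproducible illustration
n, p_true, trials = 1000, 0.50, 2000
moe = 1.96 * math.sqrt(p_true * (1 - p_true) / n)  # ~3.1 points

outside = 0
for _ in range(trials):
    # one simulated poll: n independent respondents
    sample_p = sum(random.random() < p_true for _ in range(n)) / n
    if abs(sample_p - p_true) > moe:
        outside += 1

print(f"{outside / trials:.1%} of simulated polls missed by more than the MOE")
# expect a figure close to 5%
```

With many national telephone polls in the field, such a miss stands out against its neighbors; with a single exit poll, there is nothing to compare it to.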

An exit poll is just a survey, with the same potential for error as any other. That is the main lesson of this controversy.