HUFFPOLLSTER: When Is It OK To Weight Polls By Past Vote?

09/13/2013 05:32 pm ET

We take a closer look at the practice of weighting by past vote, as used by several pollsters. Siena College is taking a closer look at a poll that missed by a mile in the Rochester mayor's race. And tonight we atone for all things Twitter. This is HuffPollster for Friday, September 13, 2013.

SHOULD A SURVEY WEIGHT ON PAST VOTING? - Yesterday's much discussed article by The New Republic's Nate Cohn revealed that in their 2012 surveys, the Democratic firm Public Policy Polling asked a question about who respondents voted for in 2008 and took that result into account when weighting by demographics. Their goal was to bring people's answers into line with the actual vote totals from 2008 by adjusting the age and racial composition of the sample. "As far as I’m aware," Cohn argued, "no other pollster uses the results of the last election the way PPP does," adding in a footnote that those who do "consider the past election" don't simply weight to the result, like PPP, or use it to influence their weighting of racial composition. "Pollsters have stayed away from this approach for good reasons," he adds. "For one, who knows whether the electorate will end up including Obama and McCain voters in the same proportion as the last election? But even more important, the results are notoriously unreliable: People lie about their vote, or just forget. Simply targeting the past election result shouldn’t work." [New Republic]

It is true that when asked by a live interviewer, retrospective vote questions tend to exaggerate both the number of people who say they voted and the margin for the winning candidate. Political scientists have consistently observed this over-reporting in live interviewer surveys dating back to the 1940s and have confirmed it with vote validation studies that check public records to see whether individual respondents actually voted. The reason for the pattern is what social scientists call "social desirability" bias: some people are so embarrassed by not voting, or by having voted for the losing candidate, that they cannot admit one or the other to a stranger on the telephone. [MysteryPollster]

PPP's Director Tom Jensen is not deterred by the over-reporting on live interviewer surveys. "I think the evidence that suggests polls overstate the margin of the last election is probably more a live caller than IVR [automated poll] phenomenon," he wrote in an email reply to Cohn posted on Thursday. "You're not going to be embarrassed to tell us you voted for someone who lost." In 2012, other pollsters who used "self-administered" surveys made the same assumption. [PPP]

RAND's American Life Panel - The RAND Corporation maintains an internet panel of nearly 5,000 members, recruited through various means in order to obtain what it describes as a representative sample of Americans. Its election survey was unique in that it asked every member of the panel to respond once a week for the four months before the election, and it used a probabilistic approach to measuring vote preferences. In addition to weighting the completed interviews by demographics each week, it also weighted respondents’ self-reports of how they voted in 2008 so that the sample matched the actual result: "Based on the premise that the best predictor of future voting behavior is past voting behavior, we also reweight each daily sample separately such that its voting behavior in 2008 matches known population voting behavior in 2008." The final RAND survey showed Obama defeating Romney by a 3.3 point margin (49.5 to 46.2). Obama's final victory margin was 3.9 percent (51.1 to 47.2). [RAND, HuffPost election results]
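Mechanically, weighting recalled vote to a known result is simple post-stratification: each recalled-vote group gets a weight equal to its actual 2008 vote share divided by its share of the sample. Here is a minimal sketch of that arithmetic in Python; the sample shares are hypothetical, and nothing here is RAND's (or any pollster's) actual data or code:

```python
# Sketch of weighting recalled 2008 vote to the actual result.
# The sample shares below are hypothetical, not any pollster's real data.

# Official 2008 national popular-vote shares (Obama 52.9%, McCain 45.7%).
TARGET_2008 = {"Obama": 0.529, "McCain": 0.457, "Other": 0.014}

def past_vote_weights(sample_shares, target=TARGET_2008):
    """Weight for each recalled-vote group: target share / sample share."""
    return {group: target[group] / share for group, share in sample_shares.items()}

# A hypothetical sample that over-reports voting for the winner:
sample = {"Obama": 0.58, "McCain": 0.40, "Other": 0.02}
weights = past_vote_weights(sample)

# Recalled Obama voters are weighted down (weight < 1), recalled McCain
# voters are weighted up (weight > 1), so the weighted sample reproduces
# the 2008 result exactly.
```

The same calculation works with any target distribution, whether the official election result or, say, an average of a pollster's own prior surveys; the controversy is over which targets are defensible, not over the arithmetic.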

Democratic Campaign Committee (DCCC) - When the story broke on Thursday, Aaron Strauss, the former director of targeting and data at the DCCC, said on Twitter that he "always suspected @ppppolls asked '08 ballot (as we did at @DCCC)," and congratulated Cohn "for uncovering the methodology," which he subsequently endorsed. Reached via email, Strauss confirmed to HuffPollster that internal automated telephone surveys conducted by the DCCC using registered voter lists included weighting 2008 vote preference to the actual result, along with other demographic characteristics. "I am very proud of our IVR record at the DCCC," Strauss said. "We conducted many more polls than were released publicly; a post-election analysis revealed that we were slightly more accurate than our live-interview survey partners. (The difference wasn't statistically significant.)" [@Aaron_Strauss, DCCC survey via Scribd]

Pew Research Center - Finally, there's the Pew Research Center. Throughout the fall campaign, the pollsters at the Pew Research Center grew concerned about the instability they were seeing in their polls. Their results among registered voters showed a massive swing from a 9-point Obama lead just before the first debate to a tied race just after, swings that paralleled similar shifts in party identification. The pollsters worried that something they call "variable differential non-response" might be at work. "In other words," Pew Research pollster Michael Dimock explained at the annual conference of the American Association for Public Opinion Research in May, "you have different groups of people who are more or less willing to participate in polls, but who is more or less willing to participate in the poll might vary depending on the election context."

Then, just before their final poll was set to field, Hurricane Sandy slammed into the East Coast of the United States. Their call centers, operated by Princeton Survey Research Associates International, were located in areas affected by the storm, and as a result their calling on the first night of the final poll was disrupted. "That left us in the later nights of the poll playing catch up," Dimock said, trying to make sure they appropriately dialed all of the sampled numbers that should have been called but "got left out in the first night of the poll." He explained that as they awaited the final data, "we had a lot of reasons to be concerned [about whether] we would be able to get a good representative sample of the nation."

When they applied their standard demographic weighting to their final complete sample of adults, the results showed another massive shift, this time to an 11 percentage point lead among all registered voters. Though there were arguments about why that movement might reflect something real, "the size and scope of the change" was a source of concern.

The Pew pollsters noticed that party identification had once again shifted in the Democrats’ favor, but they did not want to weight by an attitude, something more likely to shift from survey to survey. "We found another indicator in there that was out of line, which was people's self-reported vote in 2008," Dimock explained. "Now self-reported vote from four years ago is tricky, but it was pretty well out of range, so we ended up doing [something] we hadn't done before, which was to [weight] that poll back to that variable, but that's not something we like to do." They added a weight for the 2008 vote, but set it to match the average result from their prior 2012 polls rather than the actual 2008 outcome. "We obviously didn't weight it all the way back to the 2008 outcome. But it put us in a position that we don't like to be in, which is not letting the data speak to us, but sort of imposing some kind of limit on the range of the data."

The weighted results gave Obama a 7 percentage point lead among registered voters and a 3 point advantage (48 to 45 percent) among those deemed most likely to vote -- just one point off Obama's ultimate 4 point margin. The tweak to Pew's standard methods did not immediately appear in their standard demographic disclosure. That information was added later, just before Dimock's presentation at the AAPOR conference in May 2013. [Pew Research report, methodological statement]

The ad hoc change that Pew Research made to its procedures bears little resemblance to PPP's practice, as described by Cohn, of randomly deleting interviews and weighting to the past vote. But the episode is a reminder that in the often messy real world, even one of the most respected names in survey research sometimes departs from its standard methods.

AMERICANS CONFLICTED ON DEBT CEILING - HuffPost: “As the U.S. draws closer to hitting its borrowing limit, polls released this week show that many Americans oppose raising the debt ceiling, despite worries about the consequences of failing to do so. In an NBC/Wall Street Journal poll released Friday, 44 percent of respondents said they are against raising the debt ceiling, while 22 percent said it should be raised so the U.S. avoids ‘going into bankruptcy and defaulting on its obligations.’ The remaining third said they are unsure….Despite the lack of support for increasing the debt limit, a CNN/ORC poll released earlier this week found that 62 percent of Americans said failing to do so would cause a ‘crisis’ or ‘major problems.’” [HuffPost]

Requisite grain of salt - Jonathan Bernstein: “Since my position is that politicians should probably ignore all deficit/debt polls in general, I obviously agree that they should ignore debt limit polls. While I'm here: I'd also say that they should mostly ignore all ‘who would you blame’ speculative polls about a government shutdown, as well. At best, it's going to give you just a bit more than basic party ID questions, but mostly, asking people to predict how they'll react to possible future political stories is just a waste of time.” [Plain Blog About Politics]

NOT EVERY POLLSTER GOT IT RIGHT ON TUESDAY - While New York City mayoral polls were generally on the mark, Siena College’s survey of Rochester’s mayoral race was more than a little off. Casey Seiler: “The Siena Research Institute had some statistical explaining to do following Tuesday night's decisive Democratic primary defeat of Rochester Mayor Thomas Richards by City Council President Lovely Warren. In a poll released Sunday, Siena gave Richards a whopping 36-point lead over Warren, 63 to 27 percent. The poll of 503 likely voters — co-sponsored by YNN, the Democrat & Chronicle, WHAM TV and WXXI Public Broadcasting — had a margin of error of 4.4 percent. Instead, Warren beat Richards 58 to 41 percent, clearing the way for her to likely become Rochester's first female mayor after November's general election. Siena was far more accurate in its assessments of the Democratic mayoral primaries in Albany and Buffalo, and several other local races around the state….’I'm not willing to go with the 'Perfect Storm' explanation,’ [polling director Don] Levy said, alluding to the idea that a number of small factors added up to one big error. Instead, ‘We just pulled the wrong marbles out of the bowl.’" [Times Union]
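For what it's worth, the reported margin of error is consistent with the textbook formula for a proportion near 50 percent. A quick check (this is the standard simple-random-sampling formula and ignores any design effect from weighting):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a proportion, in percentage points."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# Siena interviewed 503 likely voters:
moe = margin_of_error(503)  # about 4.37, i.e. the reported 4.4 percent
```

Of course, the gap between the poll's 36-point Richards lead and Warren's 17-point win is a 53-point swing, far beyond anything sampling error alone can explain -- which is the point of Levy's "wrong marbles" remark.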

AAPOR TASK FORCE RECOMMENDS ENCOURAGING WIDER USE OF PUBLIC OPINION DATA - “The Task Force recognizes that AAPOR has traditionally focused more on the process or methods of public opinion research than it has on the ways in which the resulting research data are used or should be used….AAPOR as an organization may not be in a position to advocate either the degree to which public opinion research should be used by policy makers, or exactly how it should be used. AAPOR can, however, certainly be in a position to advocate that public opinion is potentially an important part of the way in which a democratic society functions, and that public opinion data should be made available in ways that it can be used as appropriate.” [AAPOR]

HUFFPOLLSTER VIA EMAIL! - You can receive this daily update every weekday via email! Just enter your email address in the box on the upper right corner of this page, and click "sign up." That's all there is to it (and you can unsubscribe anytime).

FRIDAY'S OUTLIERS - Links to more news at the intersection of polling, politics and political data:

-Americans are split as to whether Russia's plan to control Syria's chemical weapons will succeed. [Gallup]

-Fewer Americans than ever trust government to handle problems. [Gallup]

-PPP lists 15 races it got right while going it alone. [PPP]

-Charlie Cook reviews the threat to Democrats from a weak economy, Syria and declining Obama approval. [National Journal]

-Mark Mellman predicts public opinion will shift on Syria. [The Hill]

-Elizabeth Wilner argues that in politics, big data needs to keep relying on intuitive expertise. [Cook Political]

-David Hill looks at Vladimir Putin’s approval ratings. [The Hill]

-A dialect survey will tell you how you talk. [Spark via @LoganDobson]

-You know it’s Yom Kippur when Google searches for “sundown time” suddenly spike. [BuzzFeed]
