POLITICS
09/11/2013 06:17 pm ET

HUFFPOLLSTER: Pollsters (Mostly) Get It Right In NYC


The final polls in New York City were mostly right in the end. And now that the entertainment of the primaries is over, a PPP/Nate Silver Twitter war has arrived just in time. This is HuffPollster for Wednesday, September 11, 2013.

NEW YORK CITY POLLS MOSTLY GOT IT RIGHT IN THE END - Despite the high degree of difficulty posed by New York City's legendarily low response rates, the final polling on the mayoral race correctly predicted the order in which the top candidates placed, and which way they were trending. Surveys found Bill de Blasio’s numbers soaring toward the 40 percent threshold needed to avoid a runoff, Bill Thompson in a distant -- but rising -- second place, Christine Quinn in third and trailing, and Anthony Weiner, John Liu, Erick Salgado and Sal Albanese lingering in the single digits. In the comptroller’s race, polls predicted a tight competition, with Scott Stringer gaining steam. Both were broadly correct reflections of the results.
[Election results, poll chart]

But not everything -- Polls generally understated support for city comptroller John Liu, showing him at 4 to 5 percent and trending downward; he won 7 percent of the vote. And despite showing an upward trend for Bill Thompson, the final polls by Quinnipiac, Marist and PPP all showed him winning a much smaller share of the African American vote (25, 25 and 26 percent, respectively) than the Edison Research exit poll estimated (42 percent, tied with de Blasio). [NYTimes tabulation of exit poll]

Quinnipiac takes a victory lap - The only pollster in the final week to correctly call a Stringer victory, Quinnipiac says in a press release: “A September 9 survey of likely Democratic primary voters by the independent Quinnipiac University showed Public Advocate Bill de Blasio with 39 percent, followed by former Comptroller William Thompson at 25 percent, City Council Speaker Christine Quinn with 18 percent and 8 percent undecided. That poll noted that a few undecided voters could put de Blasio over the 40 percent mark needed to avoid a runoff election.”

Stringer pollster Mark Mellman demurs - "With respect & affection-Stringer was never down 19 pts as per Qpoll MT @ppppolls: Congrats to @QuinnipiacPoll on its pin point NYC polling" [@MarkMellman]

PPP BELATEDLY RELEASES COLORADO RECALL POLL, SPARKS TWITTER DUSTUP - Pollster Tom Jensen published the results of a survey taken last week, explaining he initially held onto the results because he didn’t believe them: “We did a poll last weekend in Colorado Senate District 3 and found that voters intended to recall Angela Giron by a 12 point margin, 54/42. In a district that Barack Obama won by almost 20 points I figured there was no way that could be right and made a rare decision not to release the poll. It turns out we should have had more faith in our numbers [because] she was indeed recalled by 12 points.” [PPP’s results]

AP’s Jennifer Agiesta raised some questions on Twitter - “Your blog post says you ‘made a rare decision’ not to release. Suggests it was intended for release. Not the case?...So you're saying that before you got the results, you had no intention of releasing the poll?” [@JennAgiesta]

Tarrance Group’s Logan Dobson offers a limited defense - “Don't necessarily have a problem w/ a pollster holding back on a result if they think it's way off...but it's a good reminder that any poll that reaches public has been put through that filter….Political bias can absolutely be a reason to spike a poll. But so can thinking your result is off and not wanting to look dumb….Incidentally, PPP's ability to spike a poll is based on their business model; it's so cheap for them to conduct a poll they can just toss it.” [@logandobson]

Jensen posts a longer defense of his rationale - “We are getting an awful lot of abuse about not releasing our poll on the Angela Giron recall before the election...If I'd thought we'd pulled a fast one on the world, I certainly wouldn't have released the poll after the election. But I thought there was a fascinating disconnect between the results of the election- Giron getting recalled by a wide margin- and the opinions of voters in the district about the actual gun legislation that was passed- they broadly supported one of the provisions and were evenly divided on the other. There's a fascinating story there in how the NRA was able to get voters to punish a Senator for supporting a bill voters didn't actually have that much of a problem with, and I thought that was something that belonged out in the public sphere. You might not like the decision we made about how to handle this poll but there was a legitimate reason for it.”
[PPP]

FiveThirtyEight’s Nate Silver throws down the gauntlet - “VERY bad and unscientific practice for @ppppolls to suppress a polling result they didn't believe/didn't like….unless a pollster thinks there was literally something buggy about their data collection, they ought to publish all results...I'm especially skeptical when a pollster puts its finger on the scale in a way that matches its partisan views...If you suppress ‘outliers’ you don't like but tolerate those you do, you wind up with a very biased average.” [@fivethirtyeight]
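Silver's last point can be illustrated with a toy simulation (entirely hypothetical numbers, not any real poll's data): if results are spiked only when they miss in one direction, the published average drifts the other way.

```python
import random

random.seed(7)

# Hypothetical illustration: a candidate's true margin is +4 points.
# Each simulated "poll" observes that margin with random sampling noise.
TRUE_MARGIN = 4.0
polls = [random.gauss(TRUE_MARGIN, 5.0) for _ in range(1000)]

# Symmetric practice: publish every result, however odd it looks.
publish_all = sum(polls) / len(polls)

# Asymmetric suppression: spike any "unbelievable" result showing the
# candidate down by more than 5, but tolerate equally extreme leads.
filtered = [m for m in polls if m > -5.0]
publish_filtered = sum(filtered) / len(filtered)

print(f"average of all polls:      {publish_all:+.2f}")
print(f"average after suppression: {publish_filtered:+.2f}")
```

Because only one tail gets discarded, the filtered average sits above the honest one, which is the bias Silver warns about.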

And then, for better or worse...I jumped in. What caught my attention was the claim in the initial post by PPP's Tom Jensen, first flagged by Jenn Agiesta, that they "made a rare decision not to release the poll" because, "I figured there was no way that could be right." He added later on Twitter that the decision to withhold data was made because, "We were polling a race where there was no public data- found a counter intuitive result- 1/3rd of Dems supporting recall" (emphasis added). [@ppppolls]

My tweet: "In so doing, @ppppolls helps support claims made by others that their accuracy depends on other public polls going first. Puzzling." [@MysteryPollster]

My reference was to a paper by political scientists Joshua Clinton and Steven Rodgers, published in the journal PS, that claims to find evidence that "IVR [automated] polls conducted prior to human polls are significantly poorer predictors of election outcomes than traditional human polls, even after controlling for characteristics of the states, polls, and electoral environment." The paper's conclusions strike me as shaky. The authors note that "12 of the 17 IVR polls conducted before human polls are from the same firm" (they confirmed to me via email that PPP was that firm). In all but one case, the IVR-alone polls were fielded in extremely low turnout Republican caucus or non-binding primary states, including four contests held on February 7, 2012, which they say explained "most of the effect." [PS]

Nonetheless, Jensen's admission that he spiked the poll because he didn't trust the results lends some credibility to the Clinton and Rodgers theory. Or so I implied. Jensen, not surprisingly, took strong exception: "Mark you know better than this. We release more polls publicly on stuff that no one else polls than anyone else by far...We had a party crosstab that there was real reason to be skeptical of so we didn't release." [@PPPpolls here and here]

Silver's broadside helped raise a more fundamental debate that subsequently raged on Twitter: Under what conditions is a private pollster obligated to publicly release its data? The answer is that pollsters are under no ethical obligation to release anything, although most will choose to selectively release results that help advance the interests of their clients. That bit of cherry-picking helps explain why private polling that gets publicly released typically displays a bias favoring the sponsors, and those of us who report on such data have an obligation to warn our readers of that bias.

Jensen went on to argue that PPP is an "unusual hybrid of public/private pollster, [I] think sometimes people expect us to be more public than reasonable." He retweeted Republican Matthew Dowd, who argued that PPP "has the right to release whatever polls they want. they aren't a public utility." Fair enough. To be clear, however, Jensen implied that the poll was originally intended for public release and not sponsored by a client. The whole episode raises the question of just how many polls PPP conducts, not funded by a "private" client, that it chooses to spike. [@PPPpolls, @MatthewJDowd]

Even public pollsters would be acting ethically if they held back a poll that they considered methodologically flawed, such as an improperly drawn sample, questions missing or garbled, malfeasance by interviewers or anything else that's "literally buggy about data collection" (as Silver put it). Ethics aside, however, holding back a survey just because you have doubts about the result and then releasing the same data after the election when those doubts have been resolved is highly unusual.

Washington Post’s Aaron Blake helpfully compiled a Storify of the whole contretemps [WaPost]

HUFFPOLLSTER VIA EMAIL! - You can receive this daily update every weekday via email! Just enter your email address in the box on the upper right corner of this page, and click "sign up." That's all there is to it (and you can unsubscribe anytime).

WEDNESDAY'S OUTLIERS - Links to more news at the intersection of polling, politics and political data:

-American Research Group looks at the accuracy of New York City polls compared with the preliminary results. [ARG]

-A CNN poll finds the GOP would take the blame for a government shutdown. [CNN]

-Tea Party members are increasingly dissatisfied with Republican leadership. [Pew Research]

-The New York Times examines Bill de Blasio’s late ascension in the NYC mayoral race. [NYT]

-Lee Miringoff tries to add a little understanding to the margin of error. [Marist]
