Here is a quick review of a few items of interest I neglected to link to in the aftermath of this week's Junior Tuesday primaries.
First, SurveyUSA has posted report cards comparing pollster performances in four contests: Ohio Democrats, Ohio Republicans, Texas Democrats and Texas Republicans. One suggestion for this already helpful feature: some indication of whether the results for each pollster fall within random sampling error of the actual result. In other words, in theory, even when polls are as right as they can be about an election result, they are still subject to random sampling error. As such, a poll that is "right" should capture the actual result within its "margin of error." If all polls are "right," then the ranking from best to worst is largely a matter of chance.
Yes, when we compile these scores over many different elections, those random factors should cancel out, but when focusing on an individual race, it would be useful to see, at a glance, which polls captured reality within random sampling error and which did not.
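For readers who want to check a poll against this standard themselves, here is a minimal sketch of the arithmetic. The numbers in the example are hypothetical, and the formula is the standard 95% margin of error for a simple random sample, ignoring design effects that real polls often report:

```python
import math

def within_moe(poll_pct, actual_pct, n, z=1.96):
    """Check whether a poll's reported share captures the actual
    result within its ~95% margin of sampling error.

    Returns (captured?, margin in percentage points)."""
    p = poll_pct / 100.0
    # Standard error of a proportion, scaled to percentage points.
    moe = z * math.sqrt(p * (1 - p) / n) * 100
    return abs(poll_pct - actual_pct) <= moe, round(moe, 1)

# Hypothetical example: a poll of 600 likely voters put a candidate
# at 52%; the actual result was 54%.
ok, moe = within_moe(52, 54, 600)
# ok is True, moe is 4.0 -- a 2-point miss is within a 4-point margin.
```

A report card could flag each pollster with exactly this test: inside the margin, the miss is indistinguishable from sampling noise; outside it, something beyond chance is likely at work.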
Second, here's an example of the lengths pollsters will go to in trumpeting their successes. The Boston Herald's Marie Szaniszlo sent a bouquet to Suffolk University pollster David Paleologos in the form of a short piece puffing his "knack for calling races." The evidence? A poll in Ohio that "came decidedly closer to the mark" than a survey by "polling giant Zogby," and a New Hampshire poll that showed "Obama winning by 5 compared with Zogby, which showed him leading by 13."
Adam Lewis of the Boston Phoenix responded with a blog entry calling the piece "a bit of a debacle," pointing out that Clinton, not Obama, won the New Hampshire primary, and noting several other instances this spring (the New Hampshire Republican primary and the Massachusetts and California Democratic primaries) in which the Suffolk poll had been far off the mark compared to other pollsters. His final point:
In her lede, Szaniszlo sets up a David-and-Goliath narrative, with "polling giant Zogby" showing Clinton and Obama tied just before Ohio's Democratic primary and "a small polling center based at Suffolk University" putting Clinton ahead 52-40. Clinton won by ten points. Good for Paleologos, but good for a few other pollsters, too, all of whom go unmentioned.
True, but one more thing to consider: That Ohio poll by Suffolk also forecast a Democratic electorate that was 8% African American and 38% age 65 or older. The reality, according to the exit poll, was 18% African American and 14% age 65 or older. None of the other pollsters -- save for the Columbus Dispatch mail-in poll sample that interviewed only previous primary voters -- came anywhere close to that demographic mix.
Finally, Wall Street Journal Numbers Guy Carl Bialik blogged on Tuesday about the challenges we all face in telling good polls from bad, focusing on the decision by our friends at RealClearPolitics to drop polls from the American Research Group from their averages. Here is a quote from our own Charles Franklin summing up our desire to report all polls that at least claim to provide a representative sampling of "likely voters:"
“Lots of pollsters have shown volatility, not just ARG.” Prof. Franklin added, “The inclusion or exclusion of a pollster runs the perils of cherry-picking polls, something we’ve tried not to do.”
But because of another difference between Real Clear Politics and Pollster, American Research’s numbers aren’t having much of an impact; the two poll aggregators basically agree that Sen. Clinton is ahead by six or seven points in Ohio, and by two points in Texas. That’s because Pollster’s method “has always discounted the effects of outliers — the more dramatically out of line a poll is, the less weight it gets,” Prof. Franklin said.
Those looking for more details on our method might want to review this post Charles did back in August.
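To illustrate the general idea of outlier discounting, here is a minimal, hypothetical sketch -- not Pollster's actual trend-estimation algorithm, which is described in the post linked above. The sketch simply shrinks each poll's weight as its distance from the median of the field grows, so a single out-of-line result barely moves the average:

```python
def downweighted_average(margins, scale=3.0):
    """Average poll margins (in points), giving less weight to outliers.

    `scale` is an illustrative tuning constant: roughly how many points
    a poll can stray from the median before its weight is halved-ish.
    """
    s = sorted(margins)
    mid = len(s) // 2
    median = s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2
    # Weight falls off with squared distance from the median.
    weights = [1.0 / (1.0 + ((m - median) / scale) ** 2) for m in margins]
    return sum(w * m for w, m in zip(weights, margins)) / sum(weights)

# Three polls showing Clinton +5 to +7, plus one outlier at +20:
estimate = downweighted_average([6, 7, 5, 20])
# The estimate stays near +6, while a plain mean would jump to +9.5.
```

This is one crude way to get the behavior Franklin describes: the more dramatically out of line a poll is, the less weight it gets, without anyone having to decide in advance which pollsters to exclude.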
Follow Mark Blumenthal on Twitter: www.twitter.com/MysteryPollster