More on yesterday's new Iowa poll from ABC News and the Washington Post. Stu Rothenberg has a scathing review of the coverage on Political Wire:
For years, independent political analysts have been warning about reporters' tendencies to compare polls conducted by different polling firms, to over-interpret small changes in poll results and to treat the results of the most recent survey as if they are etched into stone. And yet that's what the two networks seemed to do.
It's worth noting, though nobody did, that the July ABC News/Washington Post survey was dramatically different than other surveys taken at the time. This does not mean that the July ABC News survey was wrong or that the current one is incorrect. It is a reminder, however, that it's better to be cautious about reading too much into this, or any, poll -- even if you are paying for the survey.
The last Post/ABC poll did show Obama with a greater percentage of the vote than other surveys done at the same time. However, I certainly agree with Rothenberg's general caution against comparing polls from different organizations too closely against each other. I'll have more on this when I finish (finally) the review of reports from the disclosure project (before Thanksgiving--I promise!). But the bottom line is that we have in Iowa almost as many different methodologies and conceptions of the potential electorate as we do polls. Those differences in method make for considerable variation in the results. In a sense, they're all outliers.
The best way to consider what "the polls" say about the Democratic contest is to look at our Iowa chart, though I would recommend focusing as much on the points (representing results from individual surveys) as on the trend lines. Consider this screen grab from the 2007-only chart, which shows the Iowa results since late August for Clinton (purple), Obama (yellow) and Edwards (red). The trend lines draw on earlier data not seen in the snippet, but if you focus on the last month it is hard to see much of a trend amid all the seemingly random noise.
By and large, the Clinton results have been slightly (but not consistently) better than the Obama results with Edwards generally trailing the two. Some of the differences stem from random noise, some from systematic differences in method. Which poll has been most "right" in recent weeks? We may never know. The best characterization, given the overlap in the ranges for each candidate, is the one that The Washington Post put on their own results: "The top three Democratic presidential contenders remain locked in a close battle in Iowa."
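To see why those ranges overlap, consider a rough back-of-the-envelope calculation (my arithmetic, not the Post's): a simple random sample of about 500 likely caucus-goers carries a sampling error of roughly four to five percentage points at the 95 percent confidence level, so candidates whose results sit within a few points of each other are statistically indistinguishable.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a 500-person likely-voter sample,
# the size ABC/Post reported for their Iowa survey
moe = margin_of_error(0.5, 500)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 4.4 points
```

With an error band that wide on each candidate's number, a spread of a few points between Clinton, Obama and Edwards is exactly what "locked in a close battle" means.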
And speaking of methodology, ABC News polling director Gary Langer blogs again this week about the procedures they used to conduct this poll. His comments are worth reading along with Rothenberg's:
I blogged in August, at the time of our last Iowa poll, about our methodology there, and we followed the same random digit-dialing procedures this time. Again there's a lot of winnowing involved in getting down to likely voters: to get 500 likely Democratic caucus-goers we had to interview more than 4,800 adults in Iowa. That's a lot of calls.
Sampling methodology is a critical point of differentiation among surveys. Another difference is in the number of undecideds -- just 3 percent in our survey, vs. anywhere from 10 to 16 percent in other recently released Iowa polls.
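To put Langer's winnowing in perspective (simple arithmetic from the figures he cites, not additional data from ABC): screening 500 likely Democratic caucus-goers out of more than 4,800 adults means only about one Iowa adult in ten made it through the likely-voter filter.

```python
# Figures from Langer's post: 500 likely Democratic caucus-goers
# screened from more than 4,800 Iowa adults interviewed
likely_caucus_goers = 500
adults_interviewed = 4800

incidence = likely_caucus_goers / adults_interviewed
print(f"{incidence:.1%} of adults qualified as likely caucus-goers")  # about 10.4%
```

That low incidence rate is one reason likely-voter models diverge so much from pollster to pollster: small differences in the screening questions can produce very different electorates.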