What Could Go Wrong?

Every morning for the last month, I have posted updates looking at poll trends. When poll methodology is standardized, polls should be good at helping discern trends, whether we are making "apples-to-apples" comparisons among polls by a single pollster or using the regression trend lines we plot on charts. But today, more than any day of the last two years, we are focused intently on the level of support that each poll estimates for each candidate.
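
For readers curious about the mechanics, here is a minimal sketch of one way to compute a regression trend line of the kind we plot on charts -- a local regression ("lowess") fit over poll field dates. The smoother, the parameters and the poll numbers below are all illustrative assumptions, not the exact estimator or data we use on Pollster.com.

```python
# Illustrative only: one common approach to poll trend estimation,
# a lowess (locally weighted regression) fit over poll field dates.
# The polls below are invented for demonstration.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Hypothetical polls: day of field period and Obama's share of the vote.
days = np.array([1, 4, 6, 9, 12, 15, 18, 21, 24, 27, 30])
obama_pct = np.array([48.0, 49.5, 47.8, 50.1, 49.0, 50.5,
                      51.2, 50.0, 51.8, 52.1, 51.5])

# frac sets the smoothing window: the share of polls used in each local
# fit. Smaller values chase short-term movement; larger values smooth it.
trend = lowess(obama_pct, days, frac=0.5)

for day, estimate in trend:
    print(f"day {day:4.0f}: trend estimate {estimate:.1f}%")
```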

With that focus in mind, I have been using my NationalJournal.com column this month to examine the things that might throw off polls as a "point estimate" of the outcome of the election. I have been asking, in effect, what could go wrong. This week's column attempts to wrap it all up and consider one more potential issue -- the nightmare scenario for pollsters -- that non-response bias might throw off the estimates.

Given the intense interest, I want to excerpt my first few paragraphs here:

Will the growing number of voters reachable only by cell phone make polls less accurate? If so, polls that are not interviewing by cell phone will likely understate Obama's support. National pollsters, such as the Pew Research Center and Gallup, have reported a slight 1-2 point increase in Obama's margin when they include interviews conducted by cell phone.

The inclusion of cell phone interviews may not be the only explanation, but our trend estimates on Pollster.com show Obama leading by a wider margin on national surveys that interview "cell phone only" voters via cell phone (+9.2 as of this writing) than on those that do not (+5.4).

Are likely voter models missing a surge of new voters? Probably not, since most surveys use screens or models that will capture new registrants or newly energized voters if they express strong interest in and enthusiasm about voting. Even so, Obama is leading by wide margins even on the more traditionally restrictive likely voter models (such as those used by Gallup, Newsweek and the Pew Research Center) that include measures of past voting.
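
To make the "restrictive model" idea concrete, here is a hedged sketch of a traditional "cutoff" likely voter model of the general sort described above. The questions, scoring and turnout assumption are invented for illustration; this is not Gallup's, Newsweek's or Pew's actual model.

```python
# A sketch of a "cutoff" likely voter model: score each respondent on
# engagement and past-vote items, then keep the top scorers up to an
# assumed turnout rate. Items and weights here are illustrative only.

def likely_voter_score(respondent):
    """Count engagement indicators (illustrative items, one point each)."""
    items = [
        respondent.get("follows_election_closely", False),
        respondent.get("high_interest", False),
        respondent.get("certain_to_vote", False),
        respondent.get("knows_polling_place", False),
        respondent.get("voted_in_last_election", False),  # past-vote item
    ]
    return sum(items)

def select_likely_voters(sample, expected_turnout=0.60):
    """Keep the top-scoring share of the sample matching assumed turnout."""
    ranked = sorted(sample, key=likely_voter_score, reverse=True)
    return ranked[:int(round(len(ranked) * expected_turnout))]

# A new registrant with no vote history can still make the cutoff
# through the interest and enthusiasm items:
new_voter = {"follows_election_closely": True, "high_interest": True,
             "certain_to_vote": True, "knows_polling_place": True}
print(likely_voter_score(new_voter))  # 4 of 5 possible points
```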

What about the so-called "Bradley Effect"? Will the undecided vote break decisively to McCain, as it did for many white Republicans running against African-American Democrats in the 1980s and early 1990s? Charles Franklin and I looked hard for current evidence of either a race-of-interviewer effect (which was present in some of those races 20 years ago) or a hidden McCain vote among currently undecided voters and found none. The final survey by the Pew Research Center did a similar analysis and found a slight "break" of the remaining undecided vote to McCain, but not enough to make much of a dent in Obama's lead: It allocated 4 of the 7 undecided percentage points to McCain and 3 to Obama.

We do see some hints of a possible break of undecided voters to McCain in a few battlegrounds. In Pennsylvania, most of the recent movement has represented a shift from undecided to McCain. In Ohio, we notice consistently closer margins on automated telephone surveys than on surveys conducted with a live interviewer, and again the difference looks like a shift to McCain from undecided. Still, Obama is at or above 50 percent in both states and could carry both even with a decisive "break" of undecideds to McCain.

The column goes on to consider the theoretical possibility of non-response bias that would occur if those that pollsters cannot interview -- because they hang up or are unavailable when called -- have different political views than those who are interviewed. The Pew Research Center's Andrew Kohut has often speculated that non-response bias may have been at least partly responsible for some of the Bradley Effect seen in the late 1980s and early 1990s. He shares his thoughts about the potential for such an effect this week. I hope you'll read it all.
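
A quick back-of-the-envelope calculation shows why pollsters consider this the nightmare scenario. Every number below is hypothetical; the point is only that a modest political difference between respondents and non-respondents, combined with a low response rate, can move an estimate by a couple of points.

```python
# Hypothetical illustration of non-response bias. If non-respondents
# differ from respondents, the poll's estimate drifts from the truth by
#   bias = (non-response share) * (respondent mean - non-respondent mean)

response_rate = 0.25                # share of the sample interviewed
obama_among_respondents = 52.0      # hypothetical Obama % among respondents
obama_among_nonrespondents = 49.0   # hypothetical Obama % among those missed

true_support = (response_rate * obama_among_respondents
                + (1 - response_rate) * obama_among_nonrespondents)
bias = obama_among_respondents - true_support

print(f"poll reads {obama_among_respondents:.1f}%, truth is {true_support:.2f}%")
print(f"non-response bias: {bias:+.2f} points")  # +2.25 in this example
```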

Bottom line: We may see small effects from any of these, but, so far at least, the potential problems are likely to be offsetting or to increase Obama's lead, not reduce it.

One more thing: In the column, I say that the most rigorous national surveys struggle to achieve response rates over 30 percent. We know this mostly from a now five-year-old study [PDF] by three academic survey methodologists, Jon Krosnick, Allyson Holbrook and Alison Pfent. In a paper presented at the 2003 AAPOR Conference, they analyzed response rates from 20 national surveys contributed by major news media pollsters. They found response rates (calculated using AAPOR's Response Rate 3 formula) that ranged from 5% to 39%, with an average of 22% (see slides 8-9).
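
For those curious about the formula itself, here is a short sketch of the Response Rate 3 calculation, using AAPOR's standard disposition categories with invented case counts:

```python
# AAPOR Response Rate 3 (RR3). In AAPOR's notation: I = complete
# interviews, P = partials, R = refusals and break-offs, NC =
# non-contacts, O = other eligible non-interviews, UH/UO = cases of
# unknown eligibility, and e = the estimated share of those unknown
# cases that are actually eligible. The counts below are invented.

def aapor_rr3(I, P, R, NC, O, UH, UO, e):
    """RR3 = I / ((I + P) + (R + NC + O) + e * (UH + UO))."""
    return I / ((I + P) + (R + NC + O) + e * (UH + UO))

rate = aapor_rr3(I=1000, P=50, R=1200, NC=900, O=150, UH=2500, UO=500, e=0.6)
print(f"AAPOR RR3: {rate:.1%}")  # about 19.6% with these assumptions
```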

What do response rates look like for the surveys we are looking at now? Good luck finding an answer. The only pollster I have seen include an AAPOR response rate is the Boston Globe/University of New Hampshire Survey Center partnership, which reported an AAPOR Response Rate 4 of 32% on a recent New Hampshire poll. Would it be so hard for other pollsters to do the same?
