Does IVR Explain the Difference?

I am pondering two somewhat related questions this afternoon, but both have to do with national surveys conducted using an automated ("robo") methodology (or more formally, IVR or interactive voice response) to measure Barack Obama's job approval rating. One of those surveys is the ongoing Rasmussen Reports daily tracking; the other is the just-released-today national survey by Public Policy Polling (PPP).

Both surveys are certainly producing lower job approval scores for President Obama than those from other pollsters. The difference for Rasmussen is painfully obvious when you look at our job approval chart, magnified by the sheer number of data points they contribute to the chart. Look at the chart and you can see two bands of red "disapproval" points with the trend line falling in between. Point to and click on any of the higher scores and you will see that virtually all come from Rasmussen. Similarly point to and click on a Rasmussen "black" approval point and you will see that virtually all of their releases fall somewhere below the line.

The most recent Rasmussen Reports job rating for Obama is 55% approve, 44% disapprove. Use the filter tool to drop Rasmussen from the trend, and the current trend estimate (based on all other polls) is, with rounding, 61% approve, 30% disapprove. Leave Rasmussen in and the estimate splits the difference. The latest PPP survey produces a result very similar to Rasmussen's: 53% approve of Obama's job performance and 41% disapprove.

I know that Charles Franklin is working on a post that will discuss the impact of the Rasmussen numbers on the job approval chart, so I am going to defer to him on that aspect of this discussion. (Update: Franklin's post is up here.)

But since some will find it very tempting to jump to the conclusion that the IVR mode explains the difference -- as PPP's Tom Jensen did back in February -- I want to take a step back and consider some of the important ways these surveys differ from other polls (and from each other) that have little or nothing to do with IVR.

First consider the Rasmussen tracking: Like many other national polls, it begins with what amounts to a random digit dial sample -- randomly generated telephone numbers that should theoretically sample from all working landline telephones. However, unlike many of the national surveys, it does not include cell phone numbers, it screens to select "likely voters" rather than adults, and Rasmussen weights by party identification (using a three-month rolling average of their own results weighted demographically, but not by party). Rasmussen also asks a different version of the job approval question. Other pollsters typically ask respondents to say whether they "approve" or "disapprove"; Rasmussen asks them to choose from four categories: "strongly approve, somewhat approve, somewhat disapprove or strongly disapprove."

And Rasmussen uses an IVR methodology.
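
To make the party-identification weighting idea concrete, here is a minimal sketch of one way such weighting could work: each respondent is weighted by the ratio of a party-ID target share to that party's share of the raw sample. The respondents and targets below are made up purely for illustration; they are not Rasmussen's actual figures or procedure.

```python
# Illustrative sketch of party-ID weighting (hypothetical data, not Rasmussen's).
# Weight = target share of respondent's party / raw sample share of that party.
from collections import Counter

respondents = [
    {"party": "Dem", "approve": True},
    {"party": "Dem", "approve": False},
    {"party": "Rep", "approve": False},
    {"party": "Rep", "approve": False},
    {"party": "Ind", "approve": True},
    {"party": "Ind", "approve": False},
]

# Hypothetical targets, e.g. a rolling average of earlier samples.
party_targets = {"Dem": 0.39, "Rep": 0.33, "Ind": 0.28}

n = len(respondents)
counts = Counter(r["party"] for r in respondents)
sample_share = {party: count / n for party, count in counts.items()}

for r in respondents:
    r["weight"] = party_targets[r["party"]] / sample_share[r["party"]]

weighted_n = sum(r["weight"] for r in respondents)
weighted_approve = sum(r["weight"] for r in respondents if r["approve"])
print(f"Weighted approval: {weighted_approve / weighted_n:.1%}")
```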

Now consider PPP: Unlike Rasmussen, they draw a random sample from a national list of registered voters compiled by Aristotle International (which gathers registered voter lists from Secretaries of State in each of the 50 states plus the District of Columbia and attempts to match each voter with a listed telephone number in the many states where that information is not provided by the state). As far as I know, Aristotle has not published the percentage of registered voters on that list for which they lack a working telephone number, but it is likely a significant percentage. The critical issue is that the population covered by PPP is going to be different from that covered by other pollsters, including Rasmussen.

So, any coverage problems aside, PPP still samples a different population (registered voters) than most other public polls. Like most other pollsters, but unlike Rasmussen, they do not weight by party identification. Finally, they also ask a job approval question that is slightly different from the one most other pollsters use.

Consider these versions:

  • Gallup (and most others): "Do you approve or disapprove of the way Barack Obama is handling his job as president?"
  • Rasmussen: "How would you rate the job Barack Obama has been doing as President... do you strongly approve, somewhat approve, somewhat disapprove, or strongly disapprove of the job he's been doing?"
  • PPP: "Do you approve or disapprove of Barack Obama's job performance?"

Note the very subtle difference: Others ask about how Obama is "handling his job" or about the job he "has been doing as president." PPP asks about his "job performance." Might some respondents hear "job performance" as a question about Obama's performance on the issue of jobs? That hypothesis may seem far-fetched (and it probably is), but a note to PPP: it would be very easy to test with a split-form experiment, as sketched below.
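
The analysis of such a split-form experiment would be straightforward: ask half the sample one wording and half the other, then compare the two approval rates with a simple two-proportion z-test. The counts in the sketch below are made up solely to illustrate the calculation; they are not results from any actual survey.

```python
# Sketch of analyzing a split-form wording experiment with a two-proportion
# z-test. All counts are hypothetical placeholders.
from math import sqrt

# Form A: "handling his job as president"   Form B: "job performance"
approve_a, n_a = 305, 500   # hypothetical
approve_b, n_b = 280, 500   # hypothetical

p_a, p_b = approve_a / n_a, approve_b / n_b
p_pool = (approve_a + approve_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se

print(f"Form A approval: {p_a:.1%}, Form B approval: {p_b:.1%}")
print(f"z = {z:.2f}  (|z| > 1.96 would suggest the wording makes a difference)")
```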

Oh yes, in addition to all of the above, PPP uses an IVR methodology.

As should be obvious from this discussion, not all IVR methods are created equal. I happened to be at a meeting this morning with Jay Leve of SurveyUSA, one of the original IVR pollsters. As he pointed out, "there is as much variability among the IVR practitioners as there would be among the live telephone operators" on methodology, including some of the other more arcane aspects of methodology that I haven't referenced.

So the main point: While tempting, we cannot easily attribute to IVR all of the apparent difference in Obama's job rating as measured by Rasmussen and PPP on the one hand, and the rest of the pollsters on the other. There are simply too many variables to single out just one as critical. The lack of a live interviewer may well play a role, but the differences in the populations surveyed, the sample frames, the text of the questions asked or some other aspect of methodology may be just as important.

More generally, just because a pollster produces a large house effect in the way they measure something, especially in something relatively abstract like job approval, it does not follow automatically that their result is either "wrong" or "biased" (a conclusion some readers have reached and communicated to me via email), only different. Observing a consistent difference between pollsters is easy. Explaining that difference is, unfortunately, often quite hard.
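
For readers wondering what a "house effect" looks like in practice, one crude way to see it is to compare each pollster's average reading to the average across all polls over the same period. The sketch below uses made-up numbers and a simple overall mean rather than a proper trend estimate, and is meant only to illustrate the idea of a consistent, measurable offset -- not to reproduce how our chart is actually computed.

```python
# Rough sketch of a house-effect estimate: a pollster's average deviation
# from the cross-pollster average. Data are hypothetical; a real analysis
# would compare each poll to a trend estimate for its field dates.
from statistics import mean

# (pollster, approval) readings over a common window -- hypothetical
polls = [
    ("Gallup", 62), ("Gallup", 61), ("CBS", 63),
    ("Rasmussen", 56), ("Rasmussen", 55), ("PPP", 53),
]

overall = mean(approval for _, approval in polls)
for name in sorted({name for name, _ in polls}):
    readings = [a for n, a in polls if n == name]
    house_effect = mean(readings) - overall
    print(f"{name:10s} house effect: {house_effect:+.1f} points")
```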
