09/14/2009 12:56 pm ET

In Defense of Automated Surveys

My National Journal column for the week, now posted, defends automated, recorded-voice polling against what is becoming a common line of attack: that without a live interviewer, anyone, regardless of age, might participate in the survey. Please click through for the details.

Since I typically file my NationalJournal.com columns on Friday afternoon to appear on Monday morning, I get a chance to mull them over all weekend before posting these quick updates on Pollster. This weekend, I realized that one conclusion could have used more emphasis: My bottom line on automated polls is that they have established a strong record in measuring campaign horse-race results in pre-election polls. Over at least the last four election cycles, they have been as accurate as live interviewer polls at the end of the campaign, and their horse-race results generally track well with live interviewer surveys. So I think it is wrong to condemn automated polls simply because they use a recorded voice rather than live interviewers.

That said, we need to keep in mind that the mode of interview is just one aspect of a methodology. If you look at the best known automated surveys, you will see a lot of variation in how they draw their samples, how persistent they are in attempting to call back households where no one answers on the first call, how many interviews they conduct, how they identify likely voters, how they weight the data and, finally, in the questions they ask. All of those factors might make any given automated poll more or less reliable or accurate than any given live interviewer poll.
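To make one of those factors concrete, here is a minimal sketch of post-stratification weighting by party identification, in Python. The party targets and respondents below are hypothetical, purely for illustration; real pollsters' weighting schemes (including Rasmussen's) are considerably more elaborate.

```python
# A minimal sketch of weighting survey results by party identification.
# The target shares and sample data are hypothetical.
from collections import Counter

# Hypothetical population targets for party ID
targets = {"Dem": 0.36, "Rep": 0.30, "Ind": 0.34}

# Hypothetical respondents: (party_id, approves_of_president)
respondents = [
    ("Dem", True), ("Dem", True), ("Dem", False),
    ("Rep", False), ("Rep", False),
    ("Ind", True), ("Ind", False), ("Ind", True),
]

# Weight = target share / observed share, so each party group counts
# toward the total in proportion to its target, not its sample size.
observed = Counter(party for party, _ in respondents)
n = len(respondents)
weights = {party: targets[party] / (observed[party] / n) for party in targets}

weighted_approval = sum(
    weights[party] for party, approves in respondents if approves
) / sum(weights[party] for party, _ in respondents)

print(f"Unweighted approval: {sum(a for _, a in respondents) / n:.1%}")
print(f"Weighted approval:   {weighted_approval:.1%}")
```

The point of the sketch is simply that fixing the party mix of every sample to the same targets damps sample-to-sample swings, which is one plausible reason a party-weighted tracker would look steadier day to day.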

Also, while automated surveys have proven themselves in one particular application -- measuring campaign horse-race numbers late in the campaign -- we need to be careful not to overlook their potential shortcomings for other kinds of research. I would certainly not recommend an automated interview for any general population study that wants to ask more than four or five substantive questions, or that involves open-ended questions that allow respondents to answer in their own words.

On a slightly different subject, the column also highlights one statistic that Charles Franklin computed:

[T]he national job approval data does not support the assertion that automated polls are more "erratic." My Pollster.com partner Charles Franklin checked and found that despite identically sized three-day samples, the Rasmussen daily tracking poll is less variable than Gallup (showing standard deviations of 1.8 and 2.4, respectively), probably because Rasmussen weights its results by party identification.

Charles also sent along a chart, which is based on deviations from the trend line in Obama's job approval rating since he took office in January.

[Chart: distributions of deviations from trend in Obama job approval, Gallup vs. Rasmussen]

The tails of the Gallup curve are slightly wider than those of the Rasmussen curve. The point is not that Rasmussen is better or worse than Gallup, again only that presidential approval is slightly less variable in Rasmussen's tracking, probably because Rasmussen weights by party identification.
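For readers curious about the mechanics of Franklin's comparison, here is a minimal sketch of the kind of calculation involved: subtract a trend estimate from each day's reading and compute the standard deviation of the residuals. The two series below are simulated stand-ins, not the actual Gallup or Rasmussen data, and the centered moving-average trend is my own assumption for illustration; Franklin's trend estimate may well differ.

```python
# Measuring a poll series' variability as the standard deviation of its
# deviations from a trend line. Data are simulated; the trend is a
# simple centered moving average, chosen only for illustration.
import random
import statistics

random.seed(0)

def simulated_tracker(true_approval, noise_sd, days=200):
    """Daily readings = a flat true value + sampling noise."""
    return [true_approval + random.gauss(0, noise_sd) for _ in range(days)]

def deviations_from_trend(series, window=7):
    """Residuals around a centered moving-average trend."""
    half = window // 2
    return [
        series[i] - statistics.mean(series[i - half : i + half + 1])
        for i in range(half, len(series) - half)
    ]

noisier = simulated_tracker(true_approval=55.0, noise_sd=2.4)
steadier = simulated_tracker(true_approval=55.0, noise_sd=1.8)

for name, series in [("noisier poll", noisier), ("steadier poll", steadier)]:
    sd = statistics.stdev(deviations_from_trend(series))
    print(f"{name}: sd of deviations from trend = {sd:.2f}")
```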

You can certainly make a case that rolling-average daily tracking, whether automated or traditional, includes a lot of random variation, and that those seeking a narrative can find whatever story they want in the meaningless daily bumps. On that score, I generally agree with the advice offered by the First Read piece I quoted in the column: Beware -- lots of daily approval polls with widely differing methods "lets some folks cherry-pick what they want."
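A quick simulation shows why: hold the "true" approval rating perfectly flat, add ordinary sampling noise to each day's reading, and the three-day rolling average will still bounce around enough to supply whatever storyline someone is looking for. Everything below is simulated, purely for illustration.

```python
# Why rolling-average tracking invites over-interpretation: even with a
# perfectly flat true approval rating, three-day rolling averages of
# noisy daily samples produce day-to-day "movement". All simulated.
import random

random.seed(1)

TRUE_APPROVAL = 52.0  # held constant: any movement below is pure noise
daily = [TRUE_APPROVAL + random.gauss(0, 2.0) for _ in range(10)]

# Three-day rolling averages, available starting on day 3
rolling = [sum(daily[i - 2 : i + 1]) / 3 for i in range(2, len(daily))]

for i, value in enumerate(rolling):
    note = f" ({value - rolling[i - 1]:+.1f} vs. prior day)" if i > 0 else ""
    print(f"day {i + 3}: 3-day average {value:.1f}{note}")
```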

Finally, one subject that deserves more attention than the two brief paragraphs I gave it is what we lose when a live interviewer does not gather the data. A few weeks ago, a survey researcher named Colleen Porter shared a defense of quality interviewing in the form of an anecdote on the member-only listserv of the American Association for Public Opinion Research (AAPOR). She gave me permission to share the story here, in which she describes monitoring an interview conducted for a survey she had commissioned:

The interviewer is amazing. Her surname is Hispanic--is she this good in Spanish, too? Of course they put their best interviewers on the first night; I would, too, when I was at a survey lab.

When she asks about the location of an event, the respondent commences a story about the many times it has happened. The interviewer repeats the question exactly as worded, with emphasis on "LAST TIME," but a tone of complete patience as if reading a new question. The respondent focuses, and answers promptly.

That is exactly how it is supposed to work. Score! As the respected client, I am off in a room alone, and there is no one to give a high five. I punch the air. I love to hear good interviewing.

Update: Brendan Nyhan emails to pass along a 2006 journal article by respected political scientist Gary Jacobson (requires a subscription to view the published article, but you can access an earlier conference version of the paper here, in pdf format). Jacobson's paper is based on the 50-state job approval surveys that automated pollster SurveyUSA conducted during 2005 and early 2006. In the article's appendix, he describes how he "examined the data carefully for internal and external consistency as well as intuitive plausibility" and found that "they passed all of the tests very satisfactorily." His conclusion:

In sum, I found no reason to believe that the quality and accuracy of the aggregate data produced by SurveyUSA's automated telephone methodology is in any way inferior to that produced by other telephone surveys, and I thus have no qualms about using the data for scientific research on aggregate state-level political behavior.