Over the past several years, pollsters have been working to understand the best approach to conducting survey research in a world where no single technology allows them to reach the entire public. Most reputable polling organizations have adapted their national telephone polls to call both landlines and cell phones in order to deal with the growing cell-phone-only population. Yet these pollsters still must contend with the fact that response rates for telephone polls are generally under 25%, and adding cell phones to their samples has greatly increased the cost of polling.
At the same time, many firms have been developing technology with the aim of producing reliable and accurate opt-in Internet surveys. These surveys can generally be produced for half the cost of a telephone poll, but they diverge from traditional approaches to survey research because they do not rely on a probability sample. The proliferation of opt-in Internet surveys has generated some controversy within the survey research community (for example, here, here, here, and here). In 2010, the American Association for Public Opinion Research (AAPOR) issued a report warning against using opt-in Internet surveys to estimate population values, but also noting that significant evidence on this question was lacking. Indeed, many of the studies that AAPOR relied on in reaching its conclusions were based on outdated data.
Around the same time that AAPOR was releasing its report, Stephen Ansolabehere and I were in the field with a study comparing how three different survey modes fared when administering the same questionnaire. The modes we examined were a combined landline/cell telephone sample, a mail survey, and an opt-in Internet survey. We contracted with YouGov/Polimetrix to conduct the study. We will be presenting the findings on Friday at the annual meeting of AAPOR; in the meantime, our report can be found here.
Overall, we found few, if any, differences between the opt-in Internet survey and the telephone poll. Specifically:
1) For measures that we could validate with government data, the telephone poll and the opt-in Internet survey produced average errors that were nearly identical in size (about 5 percentage points in each case).
2) For political measures that we could not validate, the differences between the phone and Internet survey were generally small (with a few exceptions). For example, the average difference between estimates of political attitudes and opinions generated from each survey was about 5 percentage points. Furthermore, neither survey was consistently more liberal or conservative on these measures.
3) The correlational structure of the data was not significantly different across the phone and Internet surveys.
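To make the "average error" comparison in point 1 concrete, here is a minimal sketch of the calculation: the mean absolute difference, in percentage points, between each mode's estimates and the validated government benchmarks. All of the numbers below are hypothetical and for illustration only; they are not the study's actual estimates or benchmarks.

```python
def mean_absolute_error(estimates, benchmarks):
    """Average absolute difference (in percentage points) between
    survey estimates and validated benchmark values."""
    diffs = [abs(e - b) for e, b in zip(estimates, benchmarks)]
    return sum(diffs) / len(diffs)

# Hypothetical benchmarks for four validated measures (e.g., registration,
# turnout), expressed in percent -- illustrative values only.
benchmarks = [60.0, 85.0, 30.0, 45.0]
phone_estimates = [66.0, 80.0, 34.0, 50.0]
internet_estimates = [65.0, 81.0, 35.0, 51.0]

print(mean_absolute_error(phone_estimates, benchmarks))     # 5.0
print(mean_absolute_error(internet_estimates, benchmarks))  # 5.0
```

In this toy example, both modes miss the benchmarks by an average of 5 points even though their individual errors fall on different items, which mirrors how two surveys can be equally accurate overall without agreeing question by question.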
Thus, as we conclude in the report:
"Overall, our findings indicate that an opt-in Internet survey produced by a respected firm can produce results that are as accurate as those generated by a quality telephone poll and that these modes will produce few, if any, differences in the types of conclusions researchers and practitioners will draw in the realm of American public opinion."