10/20/2009 01:32 pm ET

One More Thought on Yeager, Krosnick, et al.

My column on "opt-in" Internet panel surveys yesterday ended a bit abruptly, leaving at least one reader confused about my meaning. So let me "revise and extend" as we say here in the nation's capital, beginning with the column's two final paragraphs:

[Prof. Jon] Krosnick wants to be clear that he sees no evidence yet that "opt-in sample surveys are as accurate as or more accurate than probability sample surveys," and given their lack of foundation in probability sampling, he is not optimistic that they ever will be. "It's essential for us to be honest about what our data are and what they are not," he added.

True enough, but I would add one thought. Our honesty should extend to the limitations of probability samples as well. In the Krosnick-Yeager study, for example, despite very sophisticated weighting, that very expensive, very rigorous telephone survey still produced errors outside the margin of error on 4 of 13 benchmarks. By random chance alone, it should have produced no more than 1.
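For readers who want to check the arithmetic behind "no more than 1": if each of the 13 benchmark comparisons uses a 95 percent confidence interval, then by chance alone we would expect only about 0.65 of them to fall outside the interval, and seeing four or more misses purely by chance is very unlikely. Here is a minimal sketch of that calculation; it assumes the 13 comparisons are independent, which is a simplification.

from math import comb

n, p = 13, 0.05  # 13 benchmarks, 5% chance each misses a 95% confidence interval

expected_misses = n * p  # about 0.65 misses expected by chance alone

# Probability of 4 or more misses under a binomial(13, 0.05) model
p_four_or_more = 1 - sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(4))

print(f"Expected misses by chance: {expected_misses:.2f}")
print(f"Chance of 4 or more misses: {p_four_or_more:.4f}")  # well under 1 percent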

To be clear, I did not mean to imply that David Yeager, Krosnick or any of their co-authors** were hiding anything about the margin of error. The statistics they produced on the percentage of "significant differences from benchmark" appear in Table 2 of their report and were cited in Gary Langer's initial post on the study.

Rather, the point I am trying to make is that some have reacted to this study in ways that feed a reflexive, binary, good vs. evil view of the differences between probability samples and those from opt-in Internet panels. Many hear praise of standard probability samples as "valid and reliable" or "very consistently accurate" and take it as implying infallibility. These statements also lead many to believe that any survey conducted via the Internet is cheap "crap" that any "reputable pollster would stay away from."

My point: All surveys involve some sort of trade-off between cost and accuracy. Studies like the one we've been discussing give us some tangible sense of the differences in accuracy when using opt-in samples.

The point about the potential fallibility of conventional probability sampling was made by the same Jon Krosnick (and co-author Morris Fiorina) in a 2004 paper that I linked to in last week's column:

To be sure, though, the +/- x percent margins of error that accompany many widely publicized survey results are misleading. This is true partly because these margins of error describe only sampling error, whereas we know many other sorts of error are present in survey data, including errors caused by interviewers and respondents when reporting and recording answers to questions. But in addition, at least in theory, these margins of error underestimate sampling error itself. Such estimates are accurate only if the respondents ultimately interviewed are a random draw from the original sample, and the less than perfect response rates that typify public opinion polls certainly come with non-response bias in terms of demographics. It is not an exaggeration to say that conventional public opinion surveying today begins with probability samples, then loses successive portions of the sample but hopes that in the end, the losses will cancel out or be corrected by statistical weighting using demographics, so the sample that remains will be a reasonable approximation of the population.
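To make the weighting step that passage describes a bit more concrete, here is a minimal sketch of post-stratification weighting on a single demographic variable. The group names and shares are hypothetical, not figures from either study: each respondent in a group is weighted by the ratio of that group's population share to its share of the completed interviews.

# Hypothetical population and sample shares for one demographic variable
population_share = {"age_18_34": 0.30, "age_35_64": 0.50, "age_65_plus": 0.20}
sample_share     = {"age_18_34": 0.15, "age_35_64": 0.55, "age_65_plus": 0.30}

# Post-stratification weight for each group: population share / sample share
weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

print(weights)  # underrepresented groups (here, 18-34) get weights above 1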

What is most encouraging about the Yeager, Krosnick, et al. study is that so many in the survey world are looking to the results as a useful way to judge the true accuracy of different kinds of surveys. There is much disagreement about what to make of the differences reported, but the hidden silver lining is that nearly everyone agrees that this sort of "results based analysis" has merit. That's progress.

**The original version of the column may have left the impression that David Yeager and Jon Krosnick wrote the paper on their own, which did a disservice to co-authors LinChiat Chang, Harold S. Javitz, Matthew S. Levendusky, Alberto Simpser and Rui Wang. Apologies for the oversight.