
Update: How Accurate Were the Benchmarks?

12/17/2009 05:25 pm ET | Updated May 25, 2011

Regular readers will recall that after ABC News polling director Gary Langer posted a new paper on a study by a team of Stanford University researchers that found surveys conducted using opt-in internet panels to be less accurate than probability-sample surveys, we ran two guest posts by pollsters whose companies produce such surveys. One was from Douglas Rivers, the president and CEO of YouGov/Polimetrix (the company that is also the principal sponsor of Pollster.com).

A second came from Humphrey Taylor, chairman of the Harris Poll at Harris Interactive. Taylor's response argued that "social desirability bias" -- the notion that respondents might not be comfortable accurately reporting sensitive behaviors such as smoking and drinking -- might have caused errors in both the live-interviewer telephone surveys and the in-person government surveys used as benchmarks in the paper authored by David Yeager, Jon Krosnick and their colleagues.

Today, Langer shares a response from Yeager and Krosnick, who argue in long and careful detail that the benchmark measures used in their analysis were not contaminated by social desirability bias. It is "perhaps not," as Langer notes, "the most casual reader's cup of Java," but it is certainly worth your time if you read Taylor's guest post here in October.

As just about everyone involved seems to agree, these are serious issues for the survey research profession and worthy of this sort of high-level argument. For those interested in a more basic review of the issues raised by the original Yeager et al. paper, I devoted two columns to the subject in October.