05/31/2007 10:33 pm ET

NYT: Online Polls and Traditional Pollsters

Let's start, once again, with the unavoidable conflict: This site is owned and
sponsored by Polimetrix, a company that conducts online surveys. We
are, however, walled off in a way that allows us the editorial freedom to write
whatever we choose to write (about online surveys and anything else), while
also keeping us ignorant of the work that Polimetrix does for its clients.

Case in point is the must-read article
on online surveys that appears in today's New York Times. I heard about it not from my corporate overlords,
but from a colleague who posted it on the member-only listserv of the American
Association for Public Opinion Research. The key quote:

Despite the strong skepticism, Internet-based
survey results are likely to get some publicity during the 2008 elections, and
executives from companies that conduct these surveys hope that they can use the
attention to gain credibility for their methods.

YouGov, for example, has formed a partnership with
Polimetrix, an online survey company based in Palo Alto,
Calif., for surveys in the United States.
Polimetrix, with a panel of one million people, plans to track the 2008
presidential election with a 50-state survey covering a minimum of 1,000
panelists in each state.

"State-by-state election results are an important
way for us to prove that our methodology delivers accurate results," said
Douglas Rivers, a Stanford University
political science professor who founded Polimetrix in 2004. "You can be lucky
once, but not 50 times."

Professor Rivers said that the margin of error for
Polimetrix surveys is similar to that of polls conducted by telephone. YouGov
said that its own results in recent British elections were as close or closer
to the actual votes than traditional polling methods.

Oh, so many conflicts here. I also serve on the executive
committee of the American Association for Public Opinion Research (AAPOR), which
has condemned as
"misleading" the reporting of a margin of sampling error associated with opt-in
panels. I supported that statement and, as regular readers know, have been critical
of online surveys that report a "margin of error." So on what basis can Rivers
claim that "the margin of error for Polimetrix surveys is similar to that of
polls conducted by telephone"?
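
To put some numbers on that objection: the "margin of error" attached to most published polls is just the textbook formula for a simple random sample, and its interpretation depends on every member of the population having a known probability of selection, which an opt-in panel cannot claim. A quick reminder of the formula (the p and n values in the example are illustrative, not from any actual poll):

```latex
% 95% margin of error for a proportion p estimated from a simple random
% sample of size n -- the quantity pollsters conventionally report.
% The calculation presumes probability sampling, which is exactly the
% assumption AAPOR says opt-in panels cannot make.
\[
  \mathrm{MOE}_{95\%} \approx 1.96\,\sqrt{\frac{p(1-p)}{n}},
  \qquad \text{e.g. } p = 0.5,\ n = 1000 \;\Rightarrow\; \mathrm{MOE} \approx 3.1\%.
\]
```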

Leaping at the opportunity to bite the hand that feeds me, I
emailed Doug Rivers for his comment. His response on the narrower issue of
sampling error is a bit technical, and I have copied it below. His larger argument
is about how all surveys deal with bias. Telephone surveys that begin with
random samples of working telephone numbers suffer some bias in representing
all adults due to those who lack landline phone service (coverage) or refuse
to participate in the survey (non-response). We know, for example, that they
tend to under-represent younger Americans and those who live in more urban
areas. To try to reduce or eliminate this bias, pollsters currently weight by
demographic variables (such as age, race, education level, and urban versus rural residence).
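
To make the weighting idea concrete, here is a minimal sketch of simple cell weighting, the basic move behind demographic weighting: each respondent's weight is their group's population share divided by its sample share. The age groups and shares below are placeholders for illustration, not actual census or poll figures, and real pollsters typically rake across several variables at once rather than weighting on a single one.

```python
# Minimal post-stratification sketch: weight respondents so the weighted
# sample matches assumed population shares on one demographic variable.
# Shares below are illustrative placeholders, not real census figures.

population_share = {"18-29": 0.21, "30-44": 0.26, "45-64": 0.34, "65+": 0.19}

# A toy sample that under-represents younger adults, as phone samples often do.
sample = ["45-64"] * 40 + ["65+"] * 25 + ["30-44"] * 25 + ["18-29"] * 10

n = len(sample)
sample_share = {g: sample.count(g) / n for g in population_share}

# Each respondent's weight: population share / sample share for their group.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

for g in population_share:
    print(f"{g}: sample {sample_share[g]:.2f}, "
          f"population {population_share[g]:.2f}, weight {weights[g]:.2f}")
```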

As a pool of potential respondents, "opt-in" Internet panels
also suffer a bias -- how big is also a matter of some debate -- because panel
members must have Internet access, discover the survey panel (usually through an
advertisement on a web site) and volunteer to participate. Rivers believes his "sample
matching" technique will ultimately do a better job of reducing the bias of
the opt-in panel universe than standard weighting does to reduce the bias in
standard telephone surveys. That is the crux of his argument.
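
For readers curious what "sample matching" involves, the sketch below shows only the bare-bones idea as I understand it: draw a target sample from a frame believed to represent the population, then stand in the closest available panelist for each target record. The matching variables, distance measure, and data here are invented for illustration; Polimetrix's actual procedure is considerably more sophisticated.

```python
# Bare-bones illustration of the sample-matching idea: for each record in a
# target sample drawn from a population frame, pick the nearest unused
# panelist on a few matching variables. All data and the distance measure
# are hypothetical; this is not Polimetrix's actual algorithm.

def nearest_panelist(target, panel, used):
    """Return the index of the closest unused panelist to a target record."""
    best, best_dist = None, float("inf")
    for i, person in enumerate(panel):
        if i in used:
            continue
        # Crude distance: number of mismatches on the matching variables.
        dist = sum(person[k] != target[k] for k in target)
        if dist < best_dist:
            best, best_dist = i, dist
    return best

target_sample = [{"age": "18-29", "sex": "F"}, {"age": "65+", "sex": "M"}]
panel = [{"age": "30-44", "sex": "F"}, {"age": "18-29", "sex": "F"},
         {"age": "65+", "sex": "M"}]

used, matched = set(), []
for record in target_sample:
    i = nearest_panelist(record, panel, used)
    used.add(i)
    matched.append(panel[i])

print(matched)  # the matched panelists stand in for the target sample
```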

Like a lot of my colleagues, I remain skeptical, but nonetheless committed
to an empirical evaluation. As I wrote
in Public Opinion Quarterly in 2005 (well
before I had any business relationship with Polimetrix):

At what point, if ever, might we
place greater trust in surveys drawn from opt-in panels? The only
way we will know is by continued experimentation, disclosure, and
attempts to evaluate the results through the Total Survey Error
framework. Opt-in panels are gaining popularity, whether we approve
or not. We should encourage those who procure and consume such research
to do so with great caution and to demand full disclosure of methods
and results. If nonprobability sampling can ever routinely deliver
results empirically proven more valid or reliable, we will need to
understand what produces such a result.

In a few days, Charles Franklin and I will begin posting the
findings we presented at the AAPOR conference, which include an assessment of how
the Polimetrix and other panel surveys did in 2006. This is obviously a debate
that will continue in the world of survey research, and we will try to follow
along, conflicts and all.

PS: Doug Rivers' response regarding the "margin of error":

Standard errors measure sampling
variability & sampling variability is easy to calculate when observations
are independent, which they definitely are for large opt-in panels and phone
surveys. (The standard error calculation is more complicated for cluster
samples, where the observations aren't independent, but these samples aren't
clustered; matching introduces another source of variability, but the effect on
standard errors is relatively small.) The MSE [mean squared error] calculations
involve squared bias and my ambition is to beat your average phone survey in
MSE by bias reduction.
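
For those rusty on the jargon in that last sentence: the mean squared error of a survey estimate splits into sampling variance (the piece a "margin of error" describes) plus squared bias. Rivers' argument, restated, is that sample matching shrinks the bias term enough for his surveys to beat a typical phone poll on total error, even if the variance terms are comparable.

```latex
% Standard decomposition of mean squared error for an estimator \hat{\theta}
% of a population quantity \theta: variance plus squared bias. A reported
% "margin of error" speaks only to the variance term.
\[
  \mathrm{MSE}(\hat{\theta})
    = \mathbb{E}\!\left[(\hat{\theta} - \theta)^2\right]
    = \mathrm{Var}(\hat{\theta}) + \bigl(\mathrm{Bias}(\hat{\theta})\bigr)^2
\]
```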