Mickey Kaus and Chris
Bowers at MyDD noticed that Rasmussen Reports has been showing a much
closer race on its automated national
tracking of the 2008 Democratic presidential primary contest. Both floated
different theories for that difference which imply that Rasmussen's numbers
are the more accurate read. This post takes a closer look at those arguments,
although the bottom line is that hard answers are elusive.
The chart below shows how the recent Rasmussen surveys
compare to the trend for all other conventional polls as tracked by Professor
Franklin here at Pollster. The bolder line represents the average trend across
all conventional surveys, while the shorter narrow lines connect the recent
Rasmussen surveys. Click the image to enlarge it, and you will see that all but
one of the Rasmussen surveys shows Barack Obama running better than the overall
trend. The Rasmussen results for Clinton
show far more variability, especially during the first four weeks of
Rasmussen's tracking program. They show Clinton
running worse than other polls over the last three weeks. Note that a new survey
released overnight by
Gallup (that shows Clinton's lead "tightening") has not altered
the overall trend.
Of course, the graphic above includes surveys that continue
to list Al Gore among the candidates. To reduce random
variability and make the numbers as comparable as possible, I created the following
table. It shows Clinton leading by an average of roughly 15 points (38.6%
to 23.8%) on the three most recent conventional telephone surveys, but by just
5 points (33.0% to 28.3%) on the three most recent Rasmussen automated surveys
(surveys that use a recorded voice and ask respondents to answer by pressing
buttons on their touch-tone phones). Given the number of interviews involved,
we can assume that these differences are not about random sampling error. Something
is systematically different about the Rasmussen surveys that has been showing a
tighter Democratic race over the last three weeks.
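To see why random sampling error is an unlikely explanation, here is a rough back-of-the-envelope check. The per-poll sample sizes are my assumption (roughly 600 Democratic primary voters per poll, so about 1,800 pooled per side), not figures reported by the surveys themselves:

```python
import math

def diff_se(p1, n1, p2, n2):
    """Standard error of the difference between two independent proportions."""
    return math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)

# Clinton's share: 38.6% averaged across three conventional polls vs. 33.0%
# averaged across three Rasmussen surveys. The sample sizes are hypothetical:
# ~600 respondents per poll, three polls pooled on each side.
p_conv, p_rasm = 0.386, 0.330
n_pooled = 3 * 600

se = diff_se(p_conv, n_pooled, p_rasm, n_pooled)
z = (p_conv - p_rasm) / se
print(f"SE of difference: {se:.3f}, z-score: {z:.1f}")
```

With those assumed sample sizes, the 5.6-point gap in Clinton's share is more than three standard errors wide, which is why chance alone is a poor explanation for the divergence.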
But what is that difference? That's a tougher question to answer.
Here are some theories, including those suggested by Bowers and Kaus:
1) The automated methodology yields
more honest answers about vote choice (and thus, a more
accurate result). The theory is that some people will hesitate to reveal
certain opinions to another human being, particularly those that might create
some "social discomfort" for the respondent. Thus, Kaus provides his "Don't Tell Mama"
theory: "men don't like Hillary but are reluctant to say so in public" or to
"tell a human interviewer -- especially, maybe, a female interviewer."
2) The people sampled by
Rasmussen's surveys are more representative of likely Democratic primary
voters because it uses a tighter screen. Chris Bowers makes
that point by arguing that the Rasmussen screen looks slightly tighter than
those used by other pollsters - "38-39% of the likely voter population" rather
than the "40-50% of all registered voters [sampled by] the vast majority of
national Democratic primary polls."
3) The people sampled by automated
surveys are more representative of likely primary voters because
they give more honest answers about whether they will vote. We
know from at least 40 years of validation studies that many respondents will
say they voted when they did not, due to the same sort of "social discomfort"
mentioned above. Voting is something we are supposed to do, and a small portion
of adults is reluctant to admit to non-voting to a stranger on the telephone. In
theory, an automated survey would reduce such false reports.
4) The people sampled by automated
surveys are less representative of likely primary voters because
they capture exceptionally well-informed respondents. This theory is one
I hear often from conventional pollsters. They argue that only the most
politically interested are willing to stay on the phone with a computer, and so
automated surveys tend to sample individuals who are much more opinionated and
better informed than the full pool of genuinely likely voters.
Let's take a closer look at the arguments from Kaus and Bowers.
Kaus makes much of the fact that the Rasmussen poll shows a
big gender gap, with Clinton
showing a "solid lead" (according to Rasmussen) among women, but trailing 11
points behind Obama among men. He wonders if other polls show the same gender
gap. While precise comparisons are impossible, all the other polls I found that
reported demographics results also show Clinton doing significantly better
among women than men (Cook/RT
Strategies, CBS News,
Time and the Pew Research
Center). Rasmussen certainly shows Obama doing better among men than the
other surveys, but then, Rasmussen shows Obama doing better generally than the
other surveys do.
Kaus also offers a "backup" theory:
Of course (if it turns out the
gender gap in the two polls is roughly comparable) it could be that many men and
many women don't like Hillary but are reluctant to say so in public.
His backup may be plausible, especially when interviews are
conducted by women, although we obviously have no hard evidence either way.
Bowers' theory feels like a better fit to me, especially if
we also consider the possibility that the absence of an interviewer may
reduce the "measurement error" in the selection of likely voters. The bottom
line, however, is that we really have no way to know for sure. It is certainly possible,
of course, that the Rasmussen's sampling is less accurate. All of these
theories are plausible, and without some objective reality to use as a
benchmark, we can only speculate about which set of polls is the most valid.
What strikes me most, as I go through this exercise,
is how little we know about some important methodological details. What are
the response rates? Are Rasmussen's higher or lower than conventional polls'? How many respondents answered the primary vote questions on the recent surveys conducted by ABC News/Washington Post, NBC/Wall Street Journal, Fox News and CNN? Many
pollsters provide results for subgroups of primary voters, yet virtually none
tell us about the number of interviews behind such findings. We also know
nothing of the demographic composition of their primary voter subgroups, including
gender, age or the percentage that initially identify as independent.
And how exactly do those pollsters that currently report on "likely
voters" select primary voters? How tight are their screens? Very little of this information
is in the public domain (and given that these numbers involve primary results,
a voter guide from 2004 is of little help).
I emailed Scott Rasmussen to ask about their likely voter
procedure for primary voters. His response:
with the tightest segment from our pool of Likely Voters... Dems are asked about
how likely they are to vote in Primary... Unaffiliateds are asked if they had the
chance, would they vote in a primary... if so, which one...
I am not completely sure what the "tightest segment" is, but
my guess is that they take those who say they will definitely or certainly vote
in the Democratic primary. He also confirmed that the 774 likely Democratic
primary voters came from a pool of 2,000 likely voters. So last night I asked what portion of adults qualified as likely voters, so we might do an apples-to-apples comparison of the relative "tightness"
of survey screens.
As of this writing, I have not received an answer. UPDATE: Via email, Scott Rasmussen tells me that while he did not have numbers for that specific survey readily available, the percentage of adults that qualify as likely general election voters is typically "65% to 70%...for that series." He promised to check and report back if the numbers for this latest survey are any different.
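Putting Rasmussen's figures together, we can at least sketch the apples-to-apples arithmetic. This is a minimal check assuming his "65% to 70%" typical likely-voter rate, which he has not confirmed for this specific survey:

```python
# Rasmussen: 774 likely Democratic primary voters from a pool of 2,000
# likely voters -- a 38.7% screen, matching Bowers' "38-39%" figure.
dem_share_of_lv = 774 / 2000
print(f"Share of likely voters: {dem_share_of_lv:.1%}")

# If 65-70% of adults typically qualify as likely voters in this series
# (per Rasmussen, though not confirmed for this particular survey), the
# primary screen expressed as a share of ALL adults would be roughly:
low, high = dem_share_of_lv * 0.65, dem_share_of_lv * 0.70
print(f"Share of all adults: {low:.1%} to {high:.1%}")
```

By that arithmetic the Rasmussen primary sample would be roughly a quarter of all adults, which is the number we would need from the conventional pollsters (as a share of adults, not registered voters) to judge whose screen is really tighter.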
But again, with respect to all pollsters and not just Mr. Rasmussen: why
is so little of this sort of information in the public domain? Most media pollsters
pledge to abide by professional codes of conduct that
require disclosure of basic
methodological details on request. Maybe it's time we start asking for that
information for every survey, and not just those that produce quirky results.