The big news yesterday for true political junkies was the release of separate polls conducted simultaneously in 27 of the most competitive districts nationwide (with surveys in three more districts ongoing) using an automated recorded voice rather than live interviewers. The surveys were conducted for a project dubbed "Majority Watch" by the team of RT Strategies, a DC-based firm that polls for the Cook Political Report, and Constituent Dynamics, a company that specializes in the new automated methodology. While the slick Majority Watch website provides full crosstabs if you click far enough, many readers have asked, are these surveys legitimate? Are they reliable? The best answer, to paraphrase the Magic Eight Ball, is: "reply hazy, ask again later."
The formal name for the automated methodology is Interactive Voice Response (IVR). Two companies - SurveyUSA and Rasmussen Reports - have conducted IVR surveys for years. While those companies do many things differently, both typically sample using a random digit dial (RDD) methodology that has the potential to reach every working land-line phone in a particular state. Unlike traditional surveys, the IVR polls use a recorded voice, rather than a live interviewer, and respondents must answer by pressing the keys on their touch-tone telephone. With IVR, the pollster's ability to randomly select a member of each sampled household is also far more limited.
The Majority Watch surveys add a few new twists. My friend Tom Riehle of RT Strategies kindly provided some additional details not included on the Majority Watch methodology page:
1) Majority Watch drew its samples from lists of registered voters rather than through random digit dial sampling. The advantage of this approach is that it solves the problem of how to limit the survey to those living in the correct district (a big challenge with RDD sampling). It also excludes non-registrants and allows the use of individual level vote history to determine who is a "likely voter."
The downside to voter list sampling - sometimes called Registration Based Sampling (RBS) - is that it only covers voters that have either provided their phone number to the registrar of voters or whose numbers are listed in public phone directories. "Match rates" (the percentage of voters on the list with working phone numbers) vary widely from state to state and district to district, but rarely exceed 60%. If the uncovered 40% (or more) differ in their politics, a bias can result.
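A toy calculation shows how that coverage gap can translate into bias. All of the percentages below are invented for illustration; only the 60/40 split comes from the discussion above.

```python
# Toy illustration (numbers invented): how a 60% phone "match rate"
# can bias a registration-based sample if the uncovered voters lean
# differently from the matched ones.

match_rate = 0.60
matched_dem_share = 0.50    # candidate support among voters with listed phones
unmatched_dem_share = 0.56  # support among the uncovered 40% (hypothetical)

# What the full electorate actually looks like...
true_dem_share = (match_rate * matched_dem_share
                  + (1 - match_rate) * unmatched_dem_share)

# ...versus what a poll of matched voters alone would report.
bias = matched_dem_share - true_dem_share
print(f"true support: {true_dem_share:.1%}, "
      f"poll reports: {matched_dem_share:.1%}, bias: {bias:+.1%}")
```

In this made-up case the poll would understate the candidate's support by about two points, entirely because of who has a matchable phone number.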
Pollsters continue to debate the merits of RDD and RBS sampling, and that debate deserves more attention than I will give it today. The short story is that most media pollsters continue to use RDD sampling, especially for national polls. Internal campaign pollsters have been making far greater use of list sampling, especially at the Congressional District level where they use RBS almost exclusively.
2) Majority Watch used individual vote history to select the "likely voters." The lists provided in most states by the registrar of voters typically report vote history. If you voted in the 2004 presidential election, but not in that school board election in 2005, the list will say so. It is a matter of public record. Majority Watch used an approach common to campaign pollsters: They sampled only those who cast votes in at least two of the last four general elections in their precinct (which included "off-year contests" in 2005 and 2003). What this means, in effect, is that most of their respondents voted in the 2004 presidential election and at least one other general election.
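That vote-history screen amounts to a simple filter over the voter file. Here is a minimal sketch; the record layout and field names are my assumptions, not Majority Watch's actual data format.

```python
# Hypothetical sketch of a vote-history "likely voter" screen:
# keep registrants who voted in at least two of the last four
# general elections. Field names and records are invented.

LAST_FOUR_GENERALS = ("2005", "2004", "2003", "2002")

def is_likely_voter(history):
    """True if the voter cast ballots in >= 2 of the last four generals."""
    return sum(1 for year in LAST_FOUR_GENERALS if year in history) >= 2

voter_file = [
    {"id": 1, "history": {"2004", "2002"}},          # two generals -> in sample
    {"id": 2, "history": {"2004"}},                  # only one     -> excluded
    {"id": 3, "history": {"2005", "2004", "2003"}},  # three        -> in sample
]

sample_frame = [v for v in voter_file if is_likely_voter(v["history"])]
print([v["id"] for v in sample_frame])  # prints [1, 3]
```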
Majority Watch based their "likely voter" model entirely on vote history from the list, and did not ask "screen" questions to select their sample.
3) The Majority Watch pollsters used an interesting approach to selecting a random voter in each household and matching the interviewed respondent to the actual voter on the list. They randomly selected one voter to be interviewed within each household, but then used the automated method to interview whoever answered the phone. The interview included questions asking respondents to report their gender and age. After each interview, a computer algorithm checked to see if the reported gender and age matched the data for that individual on the voter file. If the gender and age data did not match, they threw out the interview and did not include it in their tabulations.
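The validation step described above boils down to a post-interview check against the voter file. A minimal sketch follows; the one-year age tolerance is my assumption, since the actual matching rule was not published.

```python
# Sketch of the post-interview validation: discard any interview
# where the respondent's self-reported gender and age do not match
# the voter-file record of the person selected for that household.
# The one-year age tolerance is an assumption, not a published rule.

def interview_is_valid(reported, file_record, age_tolerance=1):
    """Keep the interview only if gender matches exactly and the
    self-reported age is within the tolerance of the file age."""
    if reported["gender"] != file_record["gender"]:
        return False
    return abs(reported["age"] - file_record["age"]) <= age_tolerance

selected_voter = {"gender": "F", "age": 52}  # record from the voter file

print(interview_is_valid({"gender": "F", "age": 52}, selected_voter))  # True
print(interview_is_valid({"gender": "M", "age": 52}, selected_voter))  # False
```

An interview that fails this check (someone other than the sampled voter apparently answered) is simply thrown out of the tabulations.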
4) According to the methodology page, the Majority Watch pollsters then weighted their data "to represent the likely electorate by demographic factors such as age, sex, race and geographic location in the CD." But how did they determine the demographics of the likely electorate in each district? The answer is surprisingly complicated.
They obtained data on each district reported by the U.S. Census as part of the Current Population Survey (CPS). Keep in mind, as noted in a post last week, the CPS is also a survey (albeit with a huge sample size and very high response rate), subject to some of the same over-reporting of voting behavior as other surveys.
The Census publishes data for gender and race by Congressional District, but not for age. So the Majority Watch pollsters created their own estimate of the age distribution by applying state level CPS estimates of turnout by age cohort to the district level estimates of age for all adults. If that last sentence was confusing - and I know it was - don't worry. Just note that the estimate of age for "2002 Voters" provided in the lower left portion of each district page on the Majority Watch site is their estimate, as extrapolated from statewide CPS data and not an official Census estimate.
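For those who want the arithmetic spelled out, the extrapolation works like this: multiply each age cohort's share of district adults by the statewide CPS turnout rate for that cohort, then renormalize so the shares sum to 100%. All figures below are invented for illustration.

```python
# Illustration of the age extrapolation: apply state-level CPS
# turnout rates by age cohort to a district-level age distribution
# of all adults to estimate the age mix of voters. Numbers invented.

district_adults = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}  # share of district adults
state_turnout   = {"18-34": 0.35, "35-54": 0.55, "55+": 0.70}  # statewide CPS turnout rates

raw = {cohort: district_adults[cohort] * state_turnout[cohort]
       for cohort in district_adults}
total = sum(raw.values())
voter_age_mix = {cohort: share / total for cohort, share in raw.items()}

for cohort, share in sorted(voter_age_mix.items()):
    print(f"{cohort}: {share:.1%}")
```

Because older cohorts turn out at higher rates, the estimated electorate skews noticeably older than the adult population it was derived from.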
Also note something that has confused many readers who have looked at the Majority Watch web site. All of the demographic data that appears on their district level page is taken (or derived) from U.S. Census data. It is not based on data from their surveys!
Finally, those who have drilled down deep into the Majority Watch crosstabs will notice that the age distribution on the poll is older than their Census-based age estimate. That is because the Majority Watch pollsters also looked at the estimated age distribution obtained directly from the voter lists (based on the birthdays voters provide when they register to vote). This subject is definitely worthy of more discussion, but voter lists consistently show an older electorate than the CPS survey estimates. The Majority Watch pollsters set an age target that was a meld of their CPS estimate/extrapolation and the list estimates, and weighted the data to match. How accurate and reliable is this approach? I have no idea, and am quite sure other pollsters will see shortcomings.

Here is the bottom line for those wondering how much faith to put in the Majority Watch data: Political polling gets considerably more difficult at the congressional district level. While the Majority Watch approach is innovative, it is also new and untested, and it includes many departures from standard survey practice. And, according to Tom Riehle, the design of the survey may evolve between now and Election Day (and yes, future tracking surveys are planned):
We are privately taking the methodology and results to some tough critics to find out what questions they ask that we may not have thought to ask, in order to keep moving the quality closer to the best quality that can be achieved in telephone interviewing. In that sense, this is a work in progress, because we have made our best effort to develop an excellent methodology, and will continually improve that methodology based on the informed and legitimate questions of methodological critics.
The Majority Watch surveys may turn out to yield reliable results, or they may not. We really will not know until we watch how they compare to other public polls and the ultimate election results. And here at Pollster.com, we are hoping to help you do just that.