Lost amid the attack memos and the other questions they raised is an important question facing nearly every telephone survey conducted in House, Senate and gubernatorial races this year: Are we at the point where a majority of true "likely voters" under the age of 35 are out of reach of landline telephone samples? And at what point does simply "weighting up" the younger voters pollsters can still reach become inadequate to solve the problem?
The table below, produced by the Pew Research Center and based on their national surveys, shows that by 2006 their unweighted landline samples were under-representing roughly a third of adults under age 35. And that was as of three years ago, when the percentage of all adults living in landline-only households was estimated at 12%, nine percentage points lower than the most recent estimate:
Now consider the estimated growth in the cell-phone-only population over the last three years. As shown in the chart below (which comes from a report last year by the National Center for Health Statistics of the Centers for Disease Control and Prevention), landline-only samples are most likely to miss voters under age 35.
Now consider this additional statistic reported on Pollster.com by Mike Mokrzycki in December. In the most recent CDC report, covering the first half of 2009, nearly two thirds (63.5%) of people age 25-29 lived in households with either no landline phone (45.8%) or in "cell-mostly" households (17.7%), that is, households where "all or almost all calls are received on cell phones."
So what should a pollster do if they reach so few 18-to-34-year-old voters that they make up just 1% of the likely voter sample for an election where past turnout suggests that age group should make up roughly 10% of the electorate? If the pollster believes they have under-represented younger voters, can they simply weight to correct the problem? Not if the shortfall is that extreme. In a sample of only 400 or 500 completed interviews, such a weight would multiply 4 or 5 interviews by a factor of 10. As I wrote in the column, you don't need to be a statistician to imagine how those "super respondents" might create greater error and volatility in the results, especially in those produced by cross-tabulations of demographic subgroups.
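The cost of that kind of extreme weighting can be made concrete with Kish's design-effect approximation, a standard survey-sampling formula (not anything published by the pollsters discussed here): a weighted sample behaves like an unweighted one of effective size (Σw)²/Σw². A minimal sketch in Python, using hypothetical numbers matched to the 500-interview scenario above:

```python
# Kish effective sample size for a weighted survey sample.
# Hypothetical scenario: 500 interviews, only 5 respondents age 18-34,
# weighted up by a factor of 10 so the group contributes 10% of the
# weighted sample; the other 495 are weighted slightly down to compensate.

def effective_sample_size(weights):
    """Kish approximation: ESS = (sum of weights)^2 / (sum of squared weights)."""
    total = sum(weights)
    return total * total / sum(w * w for w in weights)

young = [10.0] * 5             # 5 interviews x weight 10 = 50 weighted "voters"
other = [450.0 / 495] * 495    # remaining 495 interviews share the other 90%
weights = young + other        # weighted total stays at 500

ess = effective_sample_size(weights)
print(round(ess))  # 275 -- the weighting costs nearly half the sample's precision
```

In other words, a nominal 500-interview poll weighted this aggressively carries roughly the sampling error of a 275-interview poll, and the five "super respondents" dominate any 18-34 cross-tabulation entirely.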
Let's remember that we are able to pick at SurveyUSA only because they were willing to disclose the weighted demographics of their sample, and because they opted against any such extreme weighting in this case. So rather than beat up on SurveyUSA, we might do better to ask: How many polls have we seen in recent months that reached a similarly sparse number of younger likely voters and simply weighted them up by factors of 5 or greater to conceal the shortfall? How would we know?
Finally, whatever we want to make of the Firedoglake surveys, it is important to remember that SurveyUSA has maintained an outstanding record of final-poll accuracy, especially in U.S. House elections and in hard-to-model primary elections. For House races, the company's own scorecard -- which I have no reason to doubt -- shows that their average error on the margin in polling 27 House races in 2006 (3.4) was roughly half that of all other pollsters combined (6.3). Their error rate was also significantly lower than the three most prolific public pollsters that year, Research2000 (5.5), Zogby (5.9) and RT Strategies (5.9).
So since we have picked at their work mercilessly, I want to give SurveyUSA's Jay Leve the last word and reproduce the full email he sent me last week in response to my questions about the Firedoglake surveys:
In August 2002, SurveyUSA released a poll showing US Senator Robert Torricelli (D-NJ) trailing. No survey to that point had showed Torricelli trailing. An hour after the poll was released, SurveyUSA's client, CBS-TV in Philadelphia, called SurveyUSA and said, "Put your helmets on. The DSCC is coming after you." And the DSCC did. The DSCC found a journalist willing to write the smack that the DSCC was shoveling, and the message went forth: Nothing wrong with Robert Torricelli, plenty wrong with SurveyUSA.
A few weeks later, Torricelli dropped out of the race. Other polls had the same results as SurveyUSA.
Fast forward to today: In a poll conducted in January 2010, at a time the Democrats were losing the state of Massachusetts, SurveyUSA finds an incumbent Democrat in a tight fight in New York state. The DCCC is unhappy. Partisans start shoveling smack. "Sources" start providing willing journalists with leaked memos. Nothing wrong with Democrat Tim Bishop. Plenty wrong with SurveyUSA.
The highway to high office is littered with the road kill of political operatives who find it easier to campaign against a poll than an opponent.
Lost in the hurly-burly is an opportunity for real reflection. To my knowledge, there has never (ever) been a publicly released telephone poll conducted in a U.S. congressional district that included a known subset of interviews with respondents who did not have a home (aka: landline) telephone. An acknowledged limitation of SurveyUSA's work in NY-01, and a known limitation to date of all congressional district polling, is that voters who do not have a home phone are under-represented. At the statewide level (in contrast to the CD level), only one pollster in the 2009 election included a known subset of cell-phone-only respondents in its sample (at extraordinary expense, because of the theoretical justification), and that pollster's results were worse than those of many polling firms that did not include a known subset of cell-phone-only respondents. Whether one anticipates that in 2010 young voters will turn out in record numbers or stay home in record numbers, the problem of how to count those voters is real, and right before us.