Cell Phones and Coverage Bias

This post was published on the now-closed HuffPost Contributor platform.

Last Thursday the Pew Research Center released an analysis drawing on their extensive ongoing investigation of the impact of the growing cell-phone only population on conventional telephone surveys. It is a must read for anyone in the polling business. You may have also seen commentary on the report on Monday by Nate Silver and Chris Bowers, but I'd like to add a few thoughts.

For those new to it, the crux of the issue is that telephone surveys have traditionally relied on samples of landline telephone numbers. Unfortunately, the explosion of cell phone usage over the last 10 years places a rapidly growing number of "cell phone only" Americans out of reach of those surveys. In pollster lingo, this is a "coverage" problem. A lack of coverage will result in statistical bias if the out-of-reach segment of the population is both large and different from the rest.
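To make the arithmetic concrete, here is a toy calculation in Python (the support numbers are invented for illustration, not taken from any survey) showing how a sizable, distinctive out-of-reach group pulls a landline-only estimate away from the true population value:

```python
# Invented numbers for illustration only -- not from the Pew report.
cell_only_share = 0.23      # share of adults a landline frame cannot reach
support_cell_only = 0.52    # candidate support among cell-phone-only adults
support_landline = 0.45     # candidate support among everyone else

# The true population value blends both groups...
true_support = (cell_only_share * support_cell_only
                + (1 - cell_only_share) * support_landline)

# ...but a landline-only sample, however large, estimates only this:
landline_estimate = support_landline

bias = landline_estimate - true_support
print(f"true support:      {true_support:.3f}")       # 0.466
print(f"landline estimate: {landline_estimate:.3f}")  # 0.450
print(f"coverage bias:     {bias:+.3f}")              # -0.016
```

The bias here is about a point and a half, and it is entirely a product of who the frame excludes, not of how many interviews get done.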

What's new in this latest Pew report is the growing evidence of just such bias. Specifically:

  • The growth in cell-phone-only households continues unchecked. The latest estimates from the National Center for Health Statistics show that 25% of households (and 23% of adults) have cell phone service but no landline (another 2% of households have no telephone service at all). The cell-phone-only population has doubled (from 12% of adults) in just three years.

  • The side-by-side comparisons by the Pew Research Center, which has interviewed respondents on both landline and cell phones since 2006, now show non-coverage bias "appearing regularly" on landline-only surveys that have been fully weighted to correct for demographic imbalances:

    Of 72 questions examined [since August 2009], 43 show differences of 0, 1 or 2 percentage points between the landline and dual frame weighted samples. Twenty-nine of the differences are 3 percentage points or more, all of which are statistically significant. Only one difference is as large as 7 points, while four others are 5 points and seven are 4 points.

    The Pew Research analysts are also confident that "the bias has grown in the last four years." In 2006, they did 46 similar comparisons and found not a single difference that exceeded two percentage points.


  • Last but not least, the bias appears to extend to one very critical political measure for 2010 (emphasis added):

    Weighted estimates from the landline sample tend to slightly underestimate support for Democratic candidates when compared with estimates from dual frame landline and cell samples in polling for the midterm congressional elections this year. The same result was seen in Pew Research Center polls throughout the 2008 presidential election. In the landline sample, Republican candidates have a 47%-to-41% margin over Democratic candidates on the 2010 generic horserace, but in the combined sample voters are evenly divided in their candidate preferences for this November (44% for each party).

Two big cautions about that last bullet. First, the 2010 generic horserace comparison is based on just one survey from March that involved a landline sample of 1,442 registered voters and a combined landline-cell-phone sample of 2,070 (of whom just 191 registered voters were cell-phone-only). While it is consistent with similar comparisons in 2008 and 2010 based on far more interviews, it is possible that random variation exaggerated the bias in this single measurement.
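For a sense of how much random variation that 191-person subsample allows, here is a quick sketch of the usual 95% margin of error for a proportion (using the sample sizes cited above and the worst case p = 0.5):

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample."""
    return z * sqrt(p * (1 - p) / n)

# Sample sizes from the March survey discussed above.
for n in (191, 1442, 2070):
    print(f"n = {n:4d}: +/- {100 * margin_of_error(n):.1f} points")
```

With a margin of error around seven points on the cell-phone-only subgroup alone, a single survey can easily overstate (or understate) the gap between the two frames.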

Second, although the apparent bias is consistent with what the Pew Center found on other measures in 2008, another Pew Research report found no clear evidence that the bias led to greater polling errors in statewide polling in 2008, virtually all of which was conducted over landlines only: "[T]he average candidate error for [237 polls in statewide] races was 1.9 percentage points, about the same as in 2004 (1.7 percentage points)."

But the history of largely unbiased statewide and national polling in 2008 is no guarantee of a repeat performance in 2010, particularly given the rapid increase in the cell-phone-only population. While some statewide pollsters -- most notably Quinnipiac University -- are planning to field "dual frame" surveys that interview over both landline and cell phones this year, the vast majority of pollsters will not.

Why not? Calling by cell phone adds considerable expense and runs up against a federal law that bars pollsters from dialing a cell phone by any automated means. For live-interviewer polls, that means more time-consuming hand dialing of cell phone numbers. For those using an automated method -- like SurveyUSA, Rasmussen and PPP -- the regulation is a total barrier. The automated (IVR) pollsters simply cannot interview respondents by cell phone.

It is also worth reviewing the concluding "discussion" section of the Pew Research report for their review of why "dual frame" surveys of both landline and cell phones provide "no panacea for the coverage problem." The Pew Center estimates that in addition to the 23% of adults who are "cell-phone-only," another 17% are now "cell-phone mostly" -- Americans with both kinds of phone service who say they rely on cell phones for most of their calling. A survey that relies on a cell phone sample to reach just the 23% with cell phones only may still under-represent the cell-phone-mostly group.

Finally, a few thoughts on some suggestions Nate Silver made on Monday:

    Another approach, in the absence of calling cellphones, is to increase the sample size that one uses. Although there's not that much difference between calling an unbiased sample of 500 respondents or 1,000 (the associated margins of error are 4.4 points and 3.1 points, respectively), these differences are magnified if one relies upon upweighting results from smaller subsamples to correct for response bias, such as those for young voters or Hispanics.

    Finally, pollsters might want to consider weighting based on "non-traditional" criteria such as urban/rural status, technology usage, or perhaps even media consumption habits.

If you read that quickly, it may have sounded as if simply doing more interviews would eliminate the sort of bias that the Pew Center report describes. I don't think that's the point Nate was trying to make, but just so there's no confusion: If the sample has coverage bias, it doesn't matter whether you do 500 interviews, 1,000 interviews or 100,000. More interviews won't fix the bias.
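A quick simulation illustrates the point. The population below is invented (23% cell-phone-only, with a modestly different candidate preference); what matters is that the landline estimate stays biased no matter how many interviews you do:

```python
import random

random.seed(0)

# Invented population: 23% cell-phone-only, who support the candidate
# at 52% versus 45% for everyone else (numbers for illustration only).
population = []
for _ in range(200_000):
    cell_only = random.random() < 0.23
    supports = random.random() < (0.52 if cell_only else 0.45)
    population.append((cell_only, supports))

true_rate = sum(s for _, s in population) / len(population)

# A landline frame simply cannot reach the cell-only group.
landline_frame = [s for cell, s in population if not cell]

for n in (500, 1000, 100_000):
    sample = random.sample(landline_frame, n)
    est = sum(sample) / n
    print(f"n = {n:6d}: landline estimate {est:.3f} "
          f"(true value {true_rate:.3f})")
```

The estimates tighten around the landline frame's mean as n grows, but the gap to the true value -- the coverage bias -- never closes; you just get a more precise wrong answer.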

That said, pollsters are already weighting their samples more severely than ever before to compensate for the purely demographic bias caused by the cell-phone only coverage problem. Bigger and more severe weighting makes for more random error. Pollsters should be increasing their reported "margin of error" to account for the additional "design effect" of all that extra weighting. Unfortunately, few do (but that's another story for another day).
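A standard way to quantify that penalty is Kish's approximate design effect, deff = 1 + cv², where cv is the coefficient of variation of the weights. A sketch with invented weight distributions (not from any real poll):

```python
from math import sqrt

def design_effect(weights):
    """Kish's approximation: deff = n * sum(w^2) / (sum(w))^2."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

# Two hypothetical 1,000-respondent samples: lightly vs heavily weighted.
light = [1.0] * 800 + [1.5] * 200
heavy = [0.5] * 600 + [1.0] * 250 + [3.0] * 150

for name, w in (("light", light), ("heavy", heavy)):
    deff = design_effect(w)
    n_eff = len(w) / deff                  # effective sample size
    moe = 1.96 * sqrt(0.25 / n_eff)        # effective 95% margin of error
    print(f"{name}: deff {deff:.2f}, effective n {n_eff:.0f}, "
          f"MoE +/- {100 * moe:.1f} points")
```

The heavily weighted sample behaves like one of barely 570 respondents, so its honest margin of error is closer to four points than the nominal three.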

If pollsters can find newer non-traditional weighting schemes that can correct the sort of coverage bias described in the Pew report, those schemes will likely involve weights that are, for some respondents, even bigger and more severe. So Nate is right in recommending larger sample sizes to reduce the effectively larger "margins of error" (that have absolutely nothing to do with coverage bias).

Finally, I ran his suggestion for "non-traditional" weighting by the Pew Center's Leah Christian. First, Pew already weights by what is effectively an urban/rural measure: the population density of the respondent's county. Second, they are wary of weighting by technology usage or media consumption:

    The main issue with weighting to technology use or media consumption is the lack of reliable national parameters. In theory, someone could produce estimates from another survey that could then be used to weight a landline survey, but the reliability of these would depend a lot on the quality of the sample from the original survey and the stability of the estimates over time. Technology use and media consumption are much more variable over time (unlike many standard demographics), and I would have many of the same concerns as weighting to party identification.

My point here is not to discourage innovation but to offer a reality check. When it comes to coverage bias, there are no easy answers. The very big changes in telephone usage are increasingly challenging our ability to obtain representative samples via telephone.

Update: Pew Research's Scott Keeter emails to point out that their 2008 post-election analysis found that adding variables like marital status, income and home ownership to a regression model reduced cell-phone-only bias in predicting support for Obama. More details here.
