Huffpost Politics
The Blog


By John Garrett

Why Cantor's Polling Really Failed: The One Where a Pollster Isn't Making Excuses for the Industry

(Photo of Eric Cantor: Mark Wilson via Getty Images)

After losing in historic fashion on Tuesday -- the first sitting Majority Leader ever to lose in a primary -- Eric Cantor could be excused if he couldn't find his jaw. Not only did he lose, but no one saw it coming -- least of all his own pollster, who had him up 34 points in an internal survey conducted less than two weeks before the election.

In the end, Cantor lost 56-44 to Dave Brat, an economics professor at Randolph-Macon College. How could Cantor -- with a $5 million war chest -- lose to a political novice who raised barely $200,000? Per opensecrets.org, Cantor's campaign spent almost that much ($168,000) on steakhouses.

A lot has been written on why he lost. Was turnout -- up 38% from 2012 -- the deciding factor, and if so, what accounted for that? Was it Brat's support from Tea Party die-hards or even sneaky Dems voting for Cantor's opponent in Virginia's open primary? Was it because Cantor completely lost touch with his own district -- he wasn't even in his district (the northernmost portion is a mere 60 miles from the Capitol) on primary day? Was it because Cantor was too soft on immigration?

But this isn't about why he lost. It's about how he could lose so badly with no one seeing it coming. Quite the opposite: everyone predicted he would win. What went wrong here?

The two questions may well be related, and both point to the difficulty -- the problem, really -- with polling today. For most of the last few decades, we got used to polls being quite accurate. In the decades between Dewey and Gore, things mostly held up as predicted. (We can still debate Gore.) But times have changed, and polling has not. A lot of excuses are being made -- such as in this Frank Luntz New York Times op-ed -- for how pollsters can miss these things or how too much is expected of them. But excuses are not solutions.

Here's the problem in one word: landlines.

Pollsters rely heavily on landline phones for survey responses. That sentence almost bears repeating. Established statistical methods require everyone in the anticipated universe to have an equal chance of being included in the survey sample in order for the sample to be representative of that universe. If you try to reach people by telephone only -- particularly landline -- you are assuming that the people you can reach this way are representative of those you cannot. But what if they are different? Then you are talking to the wrong people, or missing some of the right ones, or both.
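To make that concrete, here is a small, purely illustrative simulation. The percentages are invented for the sketch, not drawn from the Cantor race: suppose 60% of an electorate is reachable by landline, 40% is cell-only, and the cell-only group leans more heavily toward the challenger. A landline-only poll can then be badly off no matter how large the sample, because the frame itself is unrepresentative.

```python
import random

random.seed(42)

# Hypothetical electorate -- all numbers here are invented for illustration.
# 60% of voters are reachable by landline; 40% are cell-only.
# The cell-only group favors the challenger more heavily.
def make_voter():
    landline = random.random() < 0.60
    if landline:
        supports_challenger = random.random() < 0.40
    else:
        supports_challenger = random.random() < 0.65
    return landline, supports_challenger

electorate = [make_voter() for _ in range(100_000)]

# True challenger support across the whole electorate (~50% by construction).
true_support = sum(s for _, s in electorate) / len(electorate)

# A landline-only poll can only sample from the reachable 60%.
landline_frame = [s for reachable, s in electorate if reachable]
polled = random.sample(landline_frame, 1000)
poll_estimate = sum(polled) / len(polled)

print(f"true challenger support: {true_support:.1%}")
print(f"landline-only poll says: {poll_estimate:.1%}")
```

With these made-up numbers, the landline-only poll understates the challenger by roughly ten points -- a miss driven entirely by who could be reached, not by sample size.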

In this case, Cantor's pollster's claim that "it was the Dems" cuts both ways: he was wrong because his anticipated universe was wrong, i.e., he was talking to the wrong people. Similarly, PPP's snap poll after the results, claiming Cantor didn't lose because of immigration, is flawed on its face: the poll showing the district supports immigration surveyed registered voters, not 2014 primary voters -- a group that makes up only 13% of registered voters. That would be like surveying the entire US to assess the attitudes of Californians. Part of the universe is the right universe, but the overall universe includes a whole lot of people who are entirely irrelevant here.

That's part of the problem.

But the problem is actually much bigger. To return to it: the problem is the landline telephone. Again, pollsters depend on the landline. Yes, they call cell phones, but they depend on the landline. This ignores something obvious to all of us if we just think about our own lives for a moment. Technology and behavior have changed. More than a third of us live in cell-phone-only households. A majority of us are cell-phone-only or cell-phone-mostly. Response rates to telephone surveys are below 10%. So if you're reaching people on their landlines -- where most people have caller ID and screen calls anyway -- who are you actually talking to? How often do you use your landline, if you even have one? How often do you answer the phone before you know who is on the other end of the line? We used to say "Hello" when we picked up the phone. Now we say, "Hey, David."

There are a lot of smart pollsters out there, and some have managed to survive this tectonic shift by applying their smarts to old methods. But times have changed, and political pollsters mostly haven't. The number of Cantors is only going to increase as we go forward. The winners will be the pollsters who figure out how to reach people where they are -- hint: not on landlines -- while still applying, as best as possible, accepted statistical practices. For our own domestic polling, we blend online, mobile, cell-phone, and landline samples -- essentially every way you can reach people short of mail or face-to-face -- and take measures to ensure the blended sample conforms to accepted statistical norms. Is it perfect? No. But it acknowledges realities that landline-heavy dialing does not.
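As a rough sketch of what that blending looks like under the hood -- with invented numbers and a simplified one-variable weighting scheme, not any pollster's actual model -- post-stratification reweights each respondent group so the blended sample matches known population shares:

```python
# Illustrative post-stratification sketch: weight a blended sample so its
# age mix matches known population shares. All numbers are invented.

# Population benchmark (e.g., from a census or voter file): share per group.
population_share = {"18-44": 0.45, "45-64": 0.35, "65+": 0.20}

# A blended sample (online + cell + landline) as (group, supports) pairs.
# It over-represents seniors, as landline-heavy samples tend to.
sample = (
    [("18-44", 1)] * 60  + [("18-44", 0)] * 90 +   # 150 respondents, 40% support
    [("45-64", 1)] * 80  + [("45-64", 0)] * 70 +   # 150 respondents, ~53% support
    [("65+",   1)] * 130 + [("65+",   0)] * 70     # 200 respondents, 65% support
)

n = len(sample)
sample_share = {g: sum(1 for grp, _ in sample if grp == g) / n
                for g in population_share}

# Each respondent's weight: population share / sample share for their group.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

raw = sum(v for _, v in sample) / n
weighted = (sum(weights[g] * v for g, v in sample)
            / sum(weights[g] for g, _ in sample))

print(f"raw support:      {raw:.1%}")
print(f"weighted support: {weighted:.1%}")
```

Here the unweighted estimate is 54%, but once the over-sampled 65+ group is weighted down to its true population share, support drops to roughly 49.7% -- the same adjustment, applied across many variables at once, is how a blended sample is pulled back toward accepted statistical norms.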

The time for hiding behind outdated norms and appeals to a mystical "gold standard" in defense of the landline is coming to an end. Just as traditionalists once pooh-poohed telephone surveys in favor of face-to-face interviews, so too must modern American political pollsters embrace new methodologies or risk finding themselves the punch line to the next Eric Cantor joke.