
Rubbish?

05/30/2008 02:24 pm ET | Updated May 25, 2011

By now your favorite political blog has probably informed you of David Runciman's essay in the London Review of Books in which he reviews the Obama-Clinton race from a British perspective and includes this broadside against American polling:

Yet if the voting patterns have been so predictable, why have the polls been so volatile? One of the amazing things about the business of American politics is that its polling industry is so primitive. Each primary has been preceded by a few wildly varying polls, some picking up big movement for Clinton, some for Obama, each able to feed the narrative of a contest that could swing decisively at any moment. All of these polls come with warnings about their margins of error (usually +/–4 per cent), but often they have been so far outside their own margins as to make the phrase ridiculous. A day before the California primary in February, the Zogby organisation had Obama ahead by 6 per cent – he ended up losing by 9 per cent. In Ohio, the same firm put Obama ahead by 2 per cent just before the actual vote – this time he lost by 10 per cent. The sampling of national opinion is even worse. Before the Indiana primary, two national polls released at the same time claimed to track the fallout from the appearance of Obama’s former pastor Jeremiah Wright on the political stage. One, for the New York Times, had Obama up by 14 per cent, and enabled the Times to run a story saying that the candidate had been undamaged. The other, for USA Today, had Clinton up by 7 per cent, leading the paper to conclude that Obama was paying a heavy price.

The reason for the differences is not hard to find. American polling organisations tend to rely on relatively small samples (certainly judged by British standards) for their results, often somewhere between 500 and 700 likely voters, compared to the more usual 1000-2000-plus for British national polls. The recent New York Times poll that gave Obama a 12 per cent lead was based on interviews with just 283 people. For a country the size of the United States, this is the equivalent to stopping a few people at random in the street, or throwing darts at a board. Given that American political life is generally so cut-throat, you might think there was room for a polling organisation that sought a competitive advantage by using the sort of sample sizes that produce relatively accurate results. Why on earth does anyone pay for this rubbish?

The polling misfires of the 2008 primary season are certainly a fair target for criticism and debate, but Runciman's diagnosis of the problem is both misleading and flawed.

First, Runciman does not compare "apples to apples," as the British polling blogger Anthony Wells puts it:

American polls normally quote as their sample size the number of likely voters, it is typical to see a poll reported as being amongst 600 “likely voters”, with the number of “unlikely voters” screened out to reach that eventual figures not made clear. In contrast, British polling companies normally quote as their sample size the number of interviews they conducted, regardless of whether those people were filtered out of voting intention questions. So, voting intentions in a UK poll with a quoted sample size of 1000, may actually be based upon 700 or so “likely voters”.

To give a couple of examples, here’s ICM’s latest poll for the Guardian. In the bumpf at the top the sample size is given as 1,008. Scroll down to page 7 though and you’ll find the voting intention figures were based on only 755 people. Here’s Ipsos-MORI’s April poll - the quoted sample size is 1,059, but the number of people involved in calculating their topline voting intention once all the unlikelies have been filtered out was only 582.

Let's also consider a few recent U.S. national polls. This week's Pew Research survey sampled 1,505 adults, including 1,242 registered voters and 618 Democratic and Democratic-leaning registered voters. The Gallup Daily tracking survey typically reports on more than 4,000 registered voters and more than 1,200 Democratic and Democratic-leaning "voters." Last week's Newsweek survey screened 1,205 registered voters from 1,399 adults and, in the process, interviewed 608 "registered Democrats and Democratic leaners." Some pollsters use smaller samples, some bigger, but when it comes to national surveys of general election voters, American samples are at least as large as, if not larger than, their British counterparts.
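To put those sample sizes in perspective, here is a rough back-of-the-envelope sketch -- mine, not any pollster's published methodology -- of the textbook 95% margin of error for a 50/50 split at several of the sample sizes mentioned above, including the 283-person subsample Runciman singles out. The standard formula ignores weighting and design effects, so treat these as ballpark figures:

```python
import math

# Textbook "maximum" 95% margin of error (a 50/50 split): 1.96 * sqrt(0.25 / n).
# Sample sizes are drawn from the discussion above and from Runciman's essay.
for n in (283, 600, 1000, 1500, 4000):
    moe = 1.96 * math.sqrt(0.25 / n)
    print(f"n = {n:5d}  ->  about +/-{100 * moe:.1f} points")

# Roughly: 283 -> +/-5.8, 600 -> +/-4.0, 1000 -> +/-3.1,
#          1500 -> +/-2.5, 4000 -> +/-1.5 points.
```

Notice that the difference between 600 and 1,000 interviews buys less than a point of nominal precision, which is why the apples-to-apples comparison above matters more than the raw numbers.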

Runciman confuses things further by comparing national British surveys to U.S. polling in low-turnout, statewide primary elections. In 2004, 61% of eligible adults voted in the U.S. presidential election, but during the 2008 primary season turnout -- while higher than usual -- typically ranged from 25% to 35% (counting both Republican and Democratic primaries). "The challenge for U.S. pollsters," as Wells puts it,

is filtering out all those people who won’t actually take part. Getting lots of people per se can be a bad thing if those people won’t actually vote, the aim is getting the right people. Considering the rather shaky record of most British pollsters in some low turnout elections like by-elections, Scottish elections, the London mayoralty and so on, we really aren’t the experts on that front.

Setting aside Runciman's fallacious "our polls are bigger than yours" theme, the biggest problem with his overall argument is the assumption that larger samples would solve all problems. If only that were true. Runciman notices that actual election returns in the U.S. primaries have often "been so far outside their own margins as to make the phrase ridiculous." That's right. If a poll has a statistical bias (in its sampling or in the way it selects likely voters), doubling or tripling the sample size will not solve the problem. Remember: the "margin of error" covers only the random variation that results from drawing a sample rather than trying to interview every voter. It tells us nothing about other potential sources of survey error.
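If you want to see that distinction in action, here is a small simulation I put together. It is a purely hypothetical illustration -- the turnout rate, the candidate splits, and the "leak rate" of non-voters through the screen are made-up numbers, not estimates of any real poll. It draws repeated samples of "likely voters" from a population where the screen lets in some people who will not actually vote:

```python
import random

random.seed(1)

TRUE_VOTER_SHARE_A = 0.55       # candidate A's share among people who actually vote
LEAKED_NONVOTER_SHARE_A = 0.40  # A's share among non-voters the screen fails to exclude
LEAK_RATE = 0.3                 # fraction of the "likely voter" sample that won't vote

def poll(n):
    """Simulate one poll of n 'likely voters' and return candidate A's measured share."""
    votes_for_a = 0
    for _ in range(n):
        if random.random() < LEAK_RATE:
            # this respondent passed the screen but will not actually vote
            votes_for_a += random.random() < LEAKED_NONVOTER_SHARE_A
        else:
            votes_for_a += random.random() < TRUE_VOTER_SHARE_A
    return votes_for_a / n

for n in (600, 2400, 9600):
    readings = [poll(n) for _ in range(500)]
    mean = sum(readings) / len(readings)
    spread = (sum((r - mean) ** 2 for r in readings) / len(readings)) ** 0.5
    print(f"n={n:5d}  average reading for A: {mean:.3f}  95% spread: +/-{1.96 * spread:.3f}")

# The spread shrinks as n grows, but every sample size converges on roughly
# 0.505 for A (0.3 * 0.40 + 0.7 * 0.55), not the true 0.55: the ~4.5-point
# screening bias is untouched by a bigger sample.
```

The poll-to-poll spread -- the thing the quoted margin of error describes -- shrinks nicely as the sample grows, but every sample size converges on the same wrong answer. That is the error a bigger sample cannot fix.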

Here is one obvious example. Take a look at the final round of polls before the Democratic primary in Pennsylvania. Which pollster had the largest sample size? The winner on that score, by far, was Public Policy Polling (PPP) with 2,338 interviews of "likely voters" conducted Sunday and Monday before the election. And which pollster had the biggest error? The same pollster, the one with the biggest sample (and this example may be unfair to PPP -- they had better luck elsewhere this year).
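For what it's worth, a sample of 2,338 carries a nominal 95% margin of error of only about two points on a single candidate's share, and roughly four on the gap between two candidates. A quick check, using the same textbook formula as above:

```python
import math

# Nominal 95% margin of error for a 2,338-interview sample,
# using the textbook 50/50 formula (no design effect).
n = 2338
moe_share = 1.96 * math.sqrt(0.25 / n)   # error on one candidate's share
moe_margin = 2 * moe_share               # conventional doubling for the gap
print(f"share: +/-{100 * moe_share:.1f} points, margin: +/-{100 * moe_margin:.1f} points")
# Roughly +/-2.0 on a share and +/-4.1 on the margin. Any miss much larger
# than that cannot be sampling error -- which is the point.
```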

Rubbish indeed.

[Typo corrected].