Looking Back at the UK Polls


My column for this week looks back at how the exit poll, the pre-election polls and the seat projections did in last week's elections in Great Britain. Please click through and read it all.

Just after I filed my column on Friday, Joe Lenski, the co-founder of Edison Research, the company that conducts exit polling for the U.S. television networks, posted the following comment on AAPOR's member listserv (reproduced here with his permission). After noting the remarkable accuracy of the projection they made as the polls closed (a subject I explore in the column), he concludes:

When you consider how complicated the British electoral system is and that the projections are being made on the results of 650 separate constituency races using exit poll interviews from only 130 polling stations, this is a spectacular achievement.

Their success also highlights the challenge that Lenski faces every two to four years. The U.K. exit pollsters conducted just one survey. On November 4, 2008, the U.S. exit pollsters fielded 51 separate surveys, dispatching over 1,000 interviewers to conduct more than 100,000 interviews.

Last week, the U.K. exit pollsters asked just one question (vote preference) because they were charged with just one task: estimating the number of seats won by each party. The typical November 2008 U.S. exit poll asked voters to answer at least two dozen questions about who they were (demographically) and why they made the choices they did.

U.S. exit polls are designed mostly to help explain the results. Their predictive role is mostly supplementary: they help confirm that blow-out contests are really blow-outs, but when the outcome is in any doubt, network election analysts wait for actual vote counts from randomly selected precincts or, if necessary, from all precincts before "calling" the outcome.

The point here is that the term "exit poll" can mean very different things in different places, and the design choices are ultimately up to the television networks and other news organizations that pay for them.

Two minor notes about the column: Technically, the results are known for all but one of the 650 constituencies. The Thirsk & Malton constituency will not hold its election until May 27 due to the death of one of the minor party candidates in late April. Since the Conservatives won Thirsk & Malton by a huge margin in 2005, most consider it a "safe" Conservative seat. Thus, with Thirsk & Malton included, the final result is likely to be 307 seats for the Conservatives, 258 for Labour and 57 for the Liberal Democrats.

Finally, for those wanting to dig deeper into scoring the accuracy of individual pollsters and prognosticators, David Shor has taken a first stab on his Stochastic Democracy blog.

Update: PoliticsHome just posted their own look back at how their model performed. It's worth reading in full, but here are two key excerpts of the commentary from Rob Ford (which follows up on the lengthy four-part exchange between Ford and Nate Silver):

The [PoliticsHome] model did perform well, although there was a large slice of luck involved. We were in fact wrong to assume that the Tories would outperform in the marginals, but this was balanced by Lib Dem underperformance everywhere to deliver roughly the right result.

We did see very clear patterns of differential swing in Scotland, as we predicted, although the differences were even larger than the polls had suggested. There were also differential patterns in Wales and in seats with large ethnic minority populations. These would both have been near the top of my list of expected differential effects, but we had no polling evidence on them so did not incorporate them in our model.

[...]

The big story, though, with regard to the UNS [uniform national swing] vs. differential swing debate is that the pattern of swing was remarkably uniform:

The change in Conservative vote varied by less than two percentage points moving from their weakest to their strongest areas, and they actually underperformed somewhat in their weakest areas relative to the average.

The change in Labour vote varied somewhat more, but there was no systematic relationship with prior strength; if anything, the party performed worse in areas where it started off somewhat weaker.

The change in Liberal Democrat vote showed more evidence of proportionality, falling back three points in the strongest areas while rising in the weaker areas. But even here the evidence of proportional swing was weak and patchy at best.

Given the lack of any clear relationship between prior strength and outcomes, we would expect proportional swing based models to perform quite poorly, and so it has proved.
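For readers new to this debate, a quick illustration may help. The sketch below is my own, not the PoliticsHome model, and uses entirely made-up vote shares; it shows how the two competing assumptions project a single constituency. Uniform swing adds each party's national change in percentage points to its local share, while proportional swing scales the local share by the party's national rate of change.

```python
# Minimal sketch of uniform vs. proportional swing projections.
# All vote shares below are hypothetical round numbers, chosen only
# to illustrate the arithmetic; this is NOT the PoliticsHome model.

national_old = {"con": 33.0, "lab": 35.0, "ld": 23.0}  # prior election, nationwide
national_new = {"con": 37.0, "lab": 29.0, "ld": 24.0}  # current polls/result

seat_old = {"con": 45.0, "lab": 30.0, "ld": 18.0}      # one hypothetical seat

def uniform_swing(seat, old, new):
    """Add each party's national change, in points, to its local share."""
    return {p: seat[p] + (new[p] - old[p]) for p in seat}

def proportional_swing(seat, old, new):
    """Scale each party's local share by its national rate of change."""
    return {p: seat[p] * (new[p] / old[p]) for p in seat}

print(uniform_swing(seat_old, national_old, national_new))
# {'con': 49.0, 'lab': 24.0, 'ld': 19.0}
print(proportional_swing(seat_old, national_old, national_new))
# {'con': 50.45..., 'lab': 24.86..., 'ld': 18.78...}
```

Ford's point is that the observed changes tracked the uniform version far more closely: a party's change in share was largely unrelated to how strong it started out, which is why proportional-swing-based models fared poorly.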

Update 2: I neglected to link to the "early postmortem" from Anthony Wells of the UK Polling Report, who has posted, and will continue to post, much more on this subject.

Also, the British Polling Council issued its own statement and scoring of the accuracy of this year's final polls:

While not proving as accurate as the 2005 polls, which were the most accurate predictions ever made of the outcome of a British general election, the polls nevertheless told the main story of the 2010 election -- that the Conservatives had established a clear lead. All but one of the nine pollsters came within 2% of the Conservative share, and five were within 1%.

The tendency at past elections for polls to overestimate Labour came to an abrupt end, with every pollster underestimating the Labour share of the vote, though all but one were within 3%. However, every pollster overestimated the Liberal Democrat share of the vote.
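One side note on the arithmetic: "within 2%" here means within two percentage points of the actual share. Below is a minimal sketch of that scoring, using a hypothetical final poll and rounded, illustrative result shares (not the official 2010 returns).

```python
# Scoring a final poll against the result, in percentage points.
# The poll figures are hypothetical; the "result" shares are rounded
# illustrative numbers, not the official 2010 returns.

result = {"con": 37.0, "lab": 30.0, "ld": 24.0}
final_poll = {"con": 36.0, "lab": 28.0, "ld": 27.0}  # hypothetical pollster

errors = {party: final_poll[party] - result[party] for party in result}
print(errors)  # {'con': -1.0, 'lab': -2.0, 'ld': 3.0}

# Average absolute error, one common summary of poll accuracy:
mean_abs_error = sum(abs(e) for e in errors.values()) / len(errors)
print(mean_abs_error)  # 2.0
```

On this kind of scoring, the pattern the BPC describes shows up as negative errors on Labour (underestimates) and positive errors on the Liberal Democrats (overestimates) across the board.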
