More on the "Convergence Mystery"


My colleague David Moore has posted here and on AAPOR's Survey Practice about the intriguing convergence of the national polls at the very end of this year's fall campaign (I made a similar observation in assessing the final round of national polls last month). I wondered whether the same phenomenon occurred at the state level. What follows is some preliminary data and my own answer to Moore's question about why this happens.

First, let's review Moore's data. His Survey Practice posting includes the following table, which presents both the average Obama lead and the variance in that lead on national surveys, broken out across six different time periods in October.

As Moore points out, "Obama's average lead each week varies only slightly over the whole month of October," yet the "substantial" variability of the polls drops off significantly in the final week. "Why," Moore asks, "do different polls show such variability over the month of October, and then suddenly converge in the last week of the campaign?" He continues:

Of course, it's true that opinions "crystallize" in the final weeks, but why should that make polls so relatively unreliable during the campaign? Shouldn't polls conducted at the same time produce the same results, even if many people are still mulling over their decisions? Shouldn't different polls find the same proportion of indecisive people?

Moore adds that the editors of Survey Practice will be asking media pollsters for their explanation and analysis of the "convergence phenomenon," reactions that they plan to publish.

I wondered whether we would see the same pattern in statewide surveys. So I replicated Moore's calculation for the twelve battleground states in which we logged 20 or more polls during October and November (Colorado, Florida, Georgia, Minnesota, Missouri, Nevada, New Hampshire, North Carolina, Ohio, Pennsylvania, Virginia and Wisconsin). Then I calculated the mean and median variance across all 12 states, along with the average Obama margin over McCain in each.

As the table below shows, the same general pattern occurs: Even though Obama's average lead remains roughly constant throughout October, the variance in that lead is even larger at the state level before a marked convergence over the last week of the campaign [I have also copied a table showing the values for each state individually, including counts of polls in each, after the jump].
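For readers who want to reproduce this kind of summary with their own poll data, here is a minimal sketch of the week-by-week mean and variance calculation. The week labels and margins below are made-up placeholders, not the actual figures from Moore's table or from our state data, and the grouping by week is my own simplification of however the original tables were constructed.

from statistics import mean, pvariance
from collections import defaultdict

# Hypothetical (week label, Obama-minus-McCain margin in points) pairs.
# These values are illustrative placeholders only.
polls = [
    ("Oct 1-7", 5), ("Oct 1-7", 8), ("Oct 1-7", 2),
    ("Oct 8-14", 7), ("Oct 8-14", 3), ("Oct 8-14", 11),
    ("Oct 29-Nov 3", 7), ("Oct 29-Nov 3", 6), ("Oct 29-Nov 3", 8),
]

# Group the poll margins by time period.
by_week = defaultdict(list)
for week, margin in polls:
    by_week[week].append(margin)

# Report the average lead and the spread of the polls in each period.
# (Population variance is used here; statistics.variance would give the
# sample variance instead.)
for week, margins in by_week.items():
    print(f"{week}: n={len(margins)}, mean lead={mean(margins):.1f}, "
          f"variance={pvariance(margins):.1f}")

With real poll margins plugged in, a shrinking variance in the final period alongside a roughly constant mean lead is the "convergence" pattern described above.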

So what's the explanation for this convergence phenomenon? I too look forward to reading the response of the media pollsters in Survey Practice, but I have my own theory. I know that many will wonder, as reader DTM does, "if there was some deliberate convergence." A deliberate "thumb on the scale" may have happened in some cases. But I want to suggest a more benign explanation.

Just about every political pollster that I know cares deeply about the accuracy of that last poll before the election. It is, as the cliche goes, the pollster's "final exam" every two years. Our clients use that final poll as a test of our accuracy, and our credibility and business are at stake every election.

So now put yourself in the position of a pollster who is producing results in any given race that are out of line with other public polls. Trust me, everyone notices, and your paying clients will ask you why your survey is so different from the rest. Every pollster in that position will check everything they can about their own survey and methods -- the sample, the interviewing procedures, the weighting, the screen questions and so on. Keep in mind that even the most by-the-book media pollsters leave themselves a fair amount of room for subjective judgment calls about their likely voter screen questions, models and weighting procedures.

Now consider this basic observation: Pollsters that look hard enough for something amiss will likely find and "correct" it. Those that don't -- those that feel comfortable that their estimates are in line with other pollsters -- will not look as hard for "problems" and, thus, are less likely to find anything in need of "fixing." Take that underlying tendency and apply it to the actions of the many different pollsters over the final weeks in October and I think you have an explanation for why results tend to converge around the mean.

This is just a theory, of course, but whatever the explanation, it should leave us questioning the value of trying to sort out "good" pollsters from "bad" using accuracy measurements based only on "the last poll."

The following table shows the results for each state individually:
