Bowers Vs. 538 Vs. Pollster


Chris Bowers posted a two-part series this week comparing the accuracy of his simple poll averaging ("simple mean of all non-campaign funded, telephone polls that were conducted entirely within the final eight days of a campaign") to the final pre-election estimates from this site and FiveThirtyEight.com.
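
Bowers's rule is simple enough to state in a few lines of code. Below is a minimal sketch, assuming hypothetical poll records with campaign_funded, mode, start_date, end_date, and margin fields; the field names are illustrative, not Bowers's own.

```python
from datetime import date, timedelta

def bowers_average(polls, election_day):
    """Mean margin across polls passing Bowers's three filters:
    non-campaign-funded, telephone, and fielded entirely within
    the final eight days before the election."""
    window_start = election_day - timedelta(days=8)
    eligible = [
        p for p in polls
        if not p["campaign_funded"]           # drop campaign-funded polls
        and p["mode"] == "telephone"          # telephone polls only
        and p["start_date"] >= window_start   # fielded entirely within
        and p["end_date"] <= election_day     # the final eight days
    ]
    if not eligible:
        return None  # as Bowers notes, the method needs polls to work at all
    return sum(p["margin"] for p in eligible) / len(eligible)

# e.g., two qualifying polls at Dem +4 and Dem +7 average to Dem +5.5
polls = [
    {"campaign_funded": False, "mode": "telephone", "margin": 4.0,
     "start_date": date(2008, 10, 28), "end_date": date(2008, 11, 2)},
    {"campaign_funded": False, "mode": "telephone", "margin": 7.0,
     "start_date": date(2008, 10, 30), "end_date": date(2008, 11, 3)},
]
print(bowers_average(polls, date(2008, 11, 4)))  # -> 5.5
```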

Chris crunches the error on the margin -- the gap between a method's predicted margin and the actual result -- in a variety of ways, but the bottom line is that there is very little difference among the methods (a sketch of the error calculation follows the list below). These are his conclusions:

  • 538 and Pollster.com even, I'm further back: Pollster was equal to 538 when all campaigns are included (the "1 or more" line) and with all campaigns except the outliers (the "2 or more" line). It is kind of funny that adjusting none of the polls and adjusting all of the polls result in the same rate of error. To no one's surprise, my method was much better among more heavily polled campaigns, but it was still about 10% behind the other two once poll averaging (2 polls or more) comes into play. I make no pretense here: my method needs polls in order to work.
  • Anti-conventional wisdom: 538 had the edge among higher-polled campaigns, which means Pollster.com was superior among lower-polled campaigns. This goes against conventional wisdom: many thought Silver's demographic regression gave him an edge among less-polled campaigns, while Pollster's method only worked well in heavily polled environments. It turns out the opposite was true, and I'm not sure why. Maybe Silver's demographic regressions don't work, but his poll weighting does. Or something.
  • Still very close: While I was a little behind, the difference between the methods is minimal. I'm a little disappointed, but clearly anyone can come very close to both 538 and Pollster.com in terms of prediction accuracy with virtually no effort. Just add up the polls and average them. It is about 90% as good as the best methods around, and anyone can do it.
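
For readers who want the mechanics: "error on the margin" is the absolute gap between a method's predicted margin and the certified result, averaged over races. Here is a minimal sketch with hypothetical race records; this is the generic calculation, not necessarily Bowers's exact bookkeeping.

```python
def mean_margin_error(races):
    """Mean absolute error on the margin across races, where each race
    carries a predicted margin and the actual result (both in points)."""
    errors = [abs(r["predicted_margin"] - r["actual_margin"]) for r in races]
    return sum(errors) / len(errors)

# e.g., a prediction of Dem +5 against an actual Dem +8 is a 3-point error;
# Rep +2 against an actual Rep +1 is a 1-point error; the mean is 2.0
races = [
    {"predicted_margin": 5.0, "actual_margin": 8.0},
    {"predicted_margin": -2.0, "actual_margin": -1.0},
]
print(mean_margin_error(races))  # -> 2.0
```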

You can see the full post for details, but his calculations are in line with what we found in our own quick (and as yet unblogged) look at the same data: we saw no meaningful differences when comparing the final, state-level estimates on Pollster to those on FiveThirtyEight.

Keep in mind that we designed our estimates, derived from the trend lines plotted on our charts, to provide the best possible representation of the underlying poll data -- nothing more and nothing less. So the accuracy of our estimates tells us that the poll data alone, once aggregated at the end of the campaign, provided remarkably accurate predictions of state-level election outcomes. That the more complex models used at FiveThirtyEight were equally accurate raises a question: in terms of predictive accuracy, what value did FiveThirtyEight's extra steps (weighting by past pollster performance and the various adjustments based on other data and regression models) provide?
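
For the curious, the trend lines behind those estimates are a form of local regression. The sketch below is a generic loess-style smoother evaluated on election eve, purely illustrative; it is not Pollster's production code, and the tricube kernel and 30-day bandwidth are assumptions.

```python
import numpy as np

def trend_estimate(days, margins, eval_day, bandwidth=30.0):
    """Locally weighted linear fit evaluated at eval_day.
    days: poll field dates as day numbers; margins: poll margins in points."""
    days = np.asarray(days, dtype=float)
    margins = np.asarray(margins, dtype=float)
    u = np.abs(days - eval_day) / bandwidth
    w = np.where(u < 1.0, (1.0 - u**3) ** 3, 0.0)  # tricube kernel weights
    X = np.column_stack([np.ones_like(days), days - eval_day])
    sw = np.sqrt(w)
    # Weighted least squares; the intercept is the trend value at eval_day
    beta, *_ = np.linalg.lstsq(X * sw[:, None], margins * sw, rcond=None)
    return beta[0]

# e.g., polls drifting from +2 a month out to +6 on election eve
days = np.array([-30, -20, -12, -6, -2, 0])
margins = np.array([2.0, 3.0, 3.5, 5.0, 5.5, 6.0])
print(trend_estimate(days, margins, eval_day=0.0))  # close to the latest polls
```

The point of the sketch is the same one made above: the estimate is driven entirely by the polls near the evaluation date, with no outside data mixed in.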
