Approval Trends and Pollsters, Part 1


We get a lot of questions, comments and complaints about the effect particular pollsters have on our trend estimates. This is an important question, and today I'll start a series of posts on this issue. I want to encourage your comments and feedback. Over the series I'll try to answer what I can, and we'll improve our approach where you raise points we aren't handling well enough and can improve on. The focus will be presidential approval, but many of the issues are generic.

Yesterday Mark posted on the Rasmussen daily tracker and whether IVR interview methodology was enough to explain the generally low approval readings from that poll. Here I want to extend this to address the frequent comment that Rasmussen is systematically distorting our trend estimates because his results are consistently below the trend line.

Let's start with a bit of data. Rasmussen represents 90 polls in the Obama series above, while Gallup's daily provides 87 polls and all other pollsters contribute 55 polls.

Even a casual glance at the figure makes it clear that the Rasmussen dailies run 2-3 points below the blue trend line, while Gallup's daily runs about the same above the trend. The other pollsters scatter widely around the trend.

The most common comment we get is that Rasmussen is clearly too low and is distorting our trend estimate downward. If we removed only Rasmussen, it is certainly true the trend estimate would shift up. But the problem is: how do you "know" that it is Rasmussen who is wrong? As one commenter put it, "It's so annoying to see Obama at 57% when everybody knows he's over 60%!!!" Well, that IS annoying if you "know" the truth, but how do we know the truth? When I talk to Republicans, they are equally certain that we "know" Rasmussen is right and that it is Gallup that is obviously wrong. How can we address this difference of views in a non-partisan, data-oriented way?

The best estimates we get for our trends come when we have lots of different polling organizations represented and none of them contributes a disproportionate share of the polls. We get in trouble in the opposite case, when one poll dominates and we have few other polls to calibrate against. An extreme case would be if we only had Rasmussen right now, or only had Gallup. Happily, that isn't the case.

At the moment we have 55 polls by firms other than Rasmussen or the Gallup daily (I include 3 USAToday/Gallup polls and 1 Gallup-only poll which are not dailies). These 55 polls come from 23 different firms, with the most from any one firm being 4 polls. This is just what we want for a standard of comparison-- lots of pollsters, none contributing too many.

This doesn't mean there are no house effects. Every polling organization has a house effect, some larger than others. But across all the pollsters we get heterogeneity in those effects with low balancing high and the result being the best estimate of the trend we can manage with polling data alone.
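To make the idea of a "house effect" concrete, here is a minimal sketch (not our actual estimation code): treat each firm's house effect as its average deviation from a common smoothed trend fit to all the polls. The data frame and column names are hypothetical, and the lowess smoother stands in for our local trend estimator.

```python
# Minimal sketch: per-pollster "house effects" as each firm's average deviation
# from a shared smoothed trend. Column names ('pollster', 'date', 'approve') are
# hypothetical, and lowess stands in for the local trend estimator actually used.
import pandas as pd
import statsmodels.api as sm

def house_effects(polls: pd.DataFrame) -> pd.Series:
    polls = polls.sort_values("date").copy()
    days = (polls["date"] - polls["date"].min()).dt.days

    # Smooth approval over time using ALL polls to get the common trend.
    trend = sm.nonparametric.lowess(polls["approve"], days,
                                    frac=0.3, return_sorted=False)

    # A firm's house effect is its mean residual from that shared trend;
    # positive values run above the trend, negative values below it.
    polls["residual"] = polls["approve"] - trend
    return polls.groupby("pollster")["residual"].mean().sort_values()
```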

The chart above estimates the trend using only these 55 polls from the 23 non-daily pollsters. That trend is plotted by the black line. The blue line is our standard trend estimate, using all the polls, including the dailies. And for comparison I've shown the trends for Rasmussen only and for the Gallup daily only.

Clearly both Rasmussen and Gallup are quite different from the overall trend and from the non-daily trend. Pick your poison: neither of these is in agreement with the non-dailies. You can prefer high or you can prefer low, but the dailies are about equally far off the black trend.

But the key point for us is that the black line for the non-dailies is very close to the standard blue trend using all the polls. The average absolute difference is barely 1 point (1.009, in fact), and on 95% of days the blue and black lines differ by less than 2 points. Sometimes blue is higher and sometimes black is higher. The average difference (not the absolute difference) is that blue is 0.3 points below the black line. (The black line is a bit more variable because it uses only the 55 non-daily polls rather than all 232 polls.)
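For readers who want the arithmetic spelled out, here is a hedged sketch of those comparison statistics, assuming two arrays of daily trend estimates aligned on the same dates (the variable and function names are illustrative, not taken from our code):

```python
# Sketch of the comparison statistics quoted above, assuming two NumPy arrays of
# daily trend values aligned on the same dates (names are illustrative).
import numpy as np

def compare_trends(all_polls_trend: np.ndarray, non_daily_trend: np.ndarray) -> dict:
    diff = all_polls_trend - non_daily_trend             # blue minus black, day by day
    return {
        "mean_abs_diff": np.mean(np.abs(diff)),          # reported above as about 1 point
        "share_within_2pts": np.mean(np.abs(diff) < 2),  # about 95% of days
        "mean_signed_diff": np.mean(diff),               # about -0.3 (blue a bit below black)
    }
```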

There are cases where we can't do this sort of analysis because of a lack of diversity in pollsters. Approval is a happy exception. It is clear there are pollster differences, but at this point they are not drastically affecting our results. If you SELECTIVELY exclude only low polls, then of course you can drive up the trend, just as you can selectively exclude only high polls and drive the trend down.

But when we take the most diverse collection of polls, we get pretty much the same trend estimates as we do with all the polls. (You can go to the interactive charts and pick what to include or exclude and see how big a range you can get. Selection of high or low polls is the key to making the trend move a lot.)

Now, this is only part 1 of this series. I'm not claiming our trends are infallible. Far from it! I know all too well that they can break when given too little data or various kinds of bad data.

In the next installment of the series I'll respond to your comments here, and show an example of a more problematic case.
