Why Is Rasmussen So Different?

12/01/2009 12:53 pm | Updated May 25, 2011

Hardly a week goes by in which I do not receive at least one email like the following:

Although I really appreciate you continually adding this "outlier" poll for your aggregated data, I do wonder why Rasmussen polling numbers are ALWAYS significantly lower and different than every other poll when measuring the President's job approval rating (with the exception of Zogby's internet poll)? How do Rasmussen pollsters explain this phenomenon and, more importantly, what is your explanation for this statistically significant ongoing discrepancy between Rasmussen and pretty much every other poll out there?

We have addressed variants of this question many times, but since it is easily the most frequently asked via email, it is probably worth trying to summarize what we've learned in one place.

Let me start with this reader's premise. Are Rasmussen's job approval ratings of President Obama typically lower than "every other poll?" The chart that follows, produced by our colleague Charles Franklin, shows the relative "house effects" on the approval percentage for organizations that routinely release national polls. Rasmussen's Obama job approval ratings (third from the bottom) do tend to be lower than most other polls, but they are not the lowest.

[Chart: House effects on the Obama approval percentage, by pollster]

Before reviewing the reasons for the difference, I want to emphasize something the chart does not tell us. The line that corresponds with the zero value is NOT a measure of "truth" or an indicator of accuracy. The numeric value plotted on the chart represents the average distance from an adjusted version of our standard trend line (it sets the median house effect to zero, producing a line that is usually within a percentage point of our standard trend line). Since that trend line is essentially the average of the results from all pollsters, the numbers represent deviations from average. Calculate house effects using a different set of pollsters, and the zero line would likely shift.
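As a rough illustration of that calculation, here is a minimal Python sketch. The pollster names and poll values are invented, and a simple cross-pollster average per date stands in for our actual trend line (which Franklin estimates differently); only the logic is the point: average each pollster's deviation from the shared trend, then re-center so the median house effect is zero.

```python
# Toy house-effect calculation. Data and pollster names are illustrative,
# not Franklin's actual estimates.
from statistics import mean, median

# {pollster: [(date_index, approve_pct), ...]}
polls = {
    "Pollster A": [(1, 52), (2, 51), (3, 53)],
    "Pollster B": [(1, 49), (2, 48), (3, 50)],
    "Pollster C": [(1, 50), (2, 50), (3, 51)],
}

# The cross-pollster average on each date stands in for the trend line.
dates = sorted({d for series in polls.values() for d, _ in series})
trend = {
    d: mean(pct for series in polls.values() for dd, pct in series if dd == d)
    for d in dates
}

# Raw house effect: a pollster's average deviation from the trend.
raw = {
    name: mean(pct - trend[d] for d, pct in series)
    for name, series in polls.items()
}

# Re-center so the median house effect is zero.
med = median(raw.values())
house_effects = {name: round(effect - med, 2) for name, effect in raw.items()}
```

Because the effects are centered on the median pollster, a negative house effect only means "below the middle of this particular set of pollsters," which is exactly why the zero line is not a measure of truth: change the set of pollsters and the zero line moves.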

A related point: Readers tend to notice the Rasmussen house effect because their daily tracking polls represent a large percentage of the points plotted on our job approval chart. For the daily tracking polls released by Rasmussen and Gallup Daily, we plot the value of every non-overlapping release (every third day). As of last week, Gallup Daily and Rasmussen together represent almost half (49%) of the points plotted on our charts (about 24% apiece). As such, their polls do tend to have greater influence on our trend line than polls from organizations that field surveys less often (see more discussion by Charles Franklin, Mike McDonald and me on the consequences of the greater influence of the daily trackers).

So why are the Rasmussen results different? Here are the three possible answers:

1) Likely voters - Of the twenty or so pollsters that routinely report national presidential job approval ratings, only Rasmussen, Zogby and Democracy Corps routinely report results for a population of "likely voters." Of the pollsters in the chart above, PPP, Quinnipiac University, Fox News/Opinion Dynamics and Diageo/Hotline report results for the population of registered voters. All the rest sample all adults. Not surprisingly, most of the organizations near the bottom of the house effect chart -- those showing lower than average job approval percentages for Obama -- report on either likely or registered voters, not adults.

Why does that matter? As Scott Rasmussen explained two weeks ago, likely voters are less likely to include young adults and minority voters who are more supportive of President Obama.

2) Different Question - Rasmussen also asks a different job approval question. Most pollsters offer just two answer categories: "Do you approve or disapprove of the way Barack Obama is handling his job as president?" Rasmussen's question prompts for four: "How would you rate the job Barack Obama has been doing as President... do you strongly approve, somewhat approve, somewhat disapprove, or strongly disapprove of the job he's been doing?"

Scott Rasmussen has long asserted that the additional "somewhat" approve or disapprove options coax some respondents to provide an answer that might otherwise end up in the "don't know" category. In an experiment conducted last week and released yesterday, Rasmussen provides support for that argument. They administered three separate surveys of 800 "likely voters," each involving a different version of the Obama job approval rating: (1) the traditional two-category, approve-or-disapprove choice, (2) the standard Rasmussen four-category version and (3) a variant used by Zogby and Harris that asks whether the president is doing an excellent, good, fair or poor job. The table below collapses the results into two categories: excellent and good combine to represent "approve," fair and poor combine to represent "disapprove."

[Table: Results of the Rasmussen question-format experiment, collapsed into two categories]

The 4-category Rasmussen version shows a smaller "don't know" (1% vs. 4%) and a much bigger disapprove percentage (52% vs. 46%) compared to the standard 2-category question. The approve percentage is only three points lower on the Rasmussen version (47%) than on the traditional question (50%). As Rasmussen writes, the differences are "consistent with years of observations that Rasmussen Reports polling consistently shows a higher level of disapproval for the President than other polls" (make of this what you will, but three years ago, Rasmussen argued that the four-category format explained a bigger "approve" percentage for President Bush).
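The collapsing step itself is simple arithmetic. As a sketch, here is the fold from multiple answer categories down to approve/disapprove/don't-know in Python. The subcategory splits below are invented for illustration; Rasmussen released only the combined figures.

```python
# Collapse multi-category approval answers into approve / disapprove / don't know.
# Subcategory counts are hypothetical placeholders.
def collapse(responses, approve_keys, disapprove_keys):
    """Fold answer categories into two buckets; everything else is 'don't know'."""
    approve = sum(responses[k] for k in approve_keys)
    disapprove = sum(responses[k] for k in disapprove_keys)
    dk = sum(v for k, v in responses.items()
             if k not in approve_keys and k not in disapprove_keys)
    return {"approve": approve, "disapprove": disapprove, "dk": dk}

# Hypothetical splits for the Rasmussen four-category question.
rasmussen_4cat = {"strongly approve": 26, "somewhat approve": 21,
                  "somewhat disapprove": 13, "strongly disapprove": 39,
                  "not sure": 1}

# Hypothetical splits for the Zogby/Harris-style excellent/good/fair/poor question.
zogby_style = {"excellent": 15, "good": 23, "fair": 20, "poor": 40,
               "not sure": 2}

rasmussen_collapsed = collapse(
    rasmussen_4cat,
    ["strongly approve", "somewhat approve"],
    ["somewhat disapprove", "strongly disapprove"])
zogby_collapsed = collapse(zogby_style, ["excellent", "good"], ["fair", "poor"])
```

The substantive question, of course, is whether "somewhat approve" and "good" really belong in the same bucket as "approve" -- the collapse is a modeling choice, not a neutral operation, which is part of why the question formats produce different totals.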

We can see that Rasmussen does in fact report a consistently higher disapproval percentage for President Obama by examining Charles Franklin's chart of house effects for the disapprove category. Here the distinction between Rasmussen, Harris and Zogby -- the three pollsters that ask something other than the traditional two-category approval question -- is more pronounced.

[Chart: House effects on the Obama disapproval percentage, by pollster]

The Rasmussen experiment shows an even bigger discrepancy between the approve percentage on the two-category questions (50%) and the much lower percentage obtained by combining excellent and good (38%). This result is similar to what Chicago Tribune pollster Nick Panagakis found on a similar experiment conducted many years ago (as described in a post last year).

Variation in the don't know category also helps explain the house effects for many of the other pollsters. The table below shows average job approval ratings for President Obama by each pollster over the course of 2009 (through November 19). It shows that smaller don't know percentages tend to translate into larger disapproval percentages. With live interviewers and similar questions, the differences are usually explained by variations in interviewer procedures and training. Interviewers who push harder for an answer when the respondent is initially uncertain obtain results with smaller percentages in the don't know column.

[Table: Average 2009 Obama job approval ratings and don't know percentages, by pollster]

3) The Automated Methodology - Much of the speculation about the differences involving Rasmussen and other automated pollsters centers on the automated mode itself (often referred to by the acronym IVR, for interactive voice response). Tom Jensen of PPP, a firm that also interviews with an automated method, offered one such theory earlier this year:

[P]eople are just more willing to say they don't like a politician to us than they are to a live interviewer because they don't feel any social pressure to be nice. That's resulted in us, Rasmussen, and Survey USA showing poorer approval numbers than most for a variety of politicians.

Other commentators offer a different theory, neatly summarized recently by John Sides, who speculates that since automated polls "generate lower response rates" than those using live interviewers, automated poll samples may "[skew] towards the kind of politically engaged citizens who are more likely to think and act as partisan[s] or ideologues," even after weighting to correct demographic imbalances.

A lack of data makes evaluating this theory very difficult. Few pollsters routinely release response rate data (and even then, technical differences in how those rates are computed make comparisons across modes challenging). And, as far as I know, no one has attempted a randomized controlled experiment to test Jensen's "social pressure" theory as applied to job approval ratings.

But that said, it is intriguing that the bottom five pollsters on Franklin's chart of estimated house effects on the approval rating all collect their data using surveys administered without live interviewers: Rasmussen and PPP use the automated telephone methodology, while Harris, Zogby and YouGov/Polimetrix survey over the Internet (using non-probability panel samples). Of course, with the exception of YouGov/Polimetrix, these firms also either interview likely or registered voters, use a different question than other pollsters, or both.

As such, it is next to impossible to disentangle these three competing explanations for why the Rasmussen polls produce a lower than average job approval score for President Obama, although we can make the strongest case for the first two.

P.S.: For further reading, we have posted on the differences between Rasmussen and other pollsters in slightly different contexts here, here and here and on my old MysteryPollster blog here, here and here. Also be sure to read Scott Rasmussen's answer last week to my question about how they select likely voters. Finally, Charles Franklin posted side-by-side charts showing the Obama job approval house effects for each pollster last week; he has posted similar charts of house effects on the 2008 horse race polls here, here, here and here.