As noted in the 'outliers' post, MyDD's Jonathan Singer flagged an odd inconsistency in a recent Rasmussen Reports survey of Minnesota. They asked likely voters to rate Republican Governor Tim Pawlenty on their standard approval scale (strongly approve, somewhat approve, somewhat disapprove or strongly disapprove) but then asked about Senator Al Franken using a different scale (excellent, good, fair or poor).
As Singer points out, the excellent-good-fair-poor scale typically produces lower positive scores. He links to a Pollster.com guest post from Chicago Tribune pollster Nick Panagakis showing that "fair" sounds like a neutral category that appears to attract voters who might otherwise answer "somewhat approve."
"Why," asks Singer, "is Rasmussen using two different metrics -- one, which tends to find higher approval ratings, for the Republican; another, which tends to find lower approval ratings, for the Democrat?"
I put that question to the folks at Rasmussen Reports and received this response from a spokesperson:
It was a mistake that slipped through the cracks. The matter has been addressed internally with all involved.
We work with a local TV station that provides us with local knowledge. In exchange, they get first look at the data and Scott Rasmussen goes on air to discuss the results. In practical terms, this means they suggest questions and topics that are likely to be of interest in their state.
They do not commission the poll, it's a Rasmussen Reports poll and we are ultimately responsible for the questions.
We work with a standard template that includes the President's Job Approval rating and the Governor's. In this case, the station suggested a variety of topics ranging from politics to the Vikings' playoff chances and Brett Favre. The station recommended the questions on the Senators and did so with the excellent, good, fair, poor rating. The person preparing the script noted (correctly) that this was an acceptable format we have used before in other surveys. However, they should have noted the inconsistency with the other approval questions and asked all in the same way. The editor reviewing the process also failed to pick up the inconsistency.
Credit to Rasmussen Reports for admitting a mistake, but they should also append this statement to the original analysis and post it separately, so that those who saw (and linked to) the original numbers will see the explanation.