
David C. Wilson

Posted: September 21, 2010 08:01 PM

It's not surprising there are so many polling organizations these days. The 24-hour news cycle demands fresh and exciting information to share with the public, and polls supply much of that drug. Yet consumers of poll data should maintain a healthy skepticism about the popular interpretations of poll numbers presented on television, in newspapers and online blogs, and on the radio.

When it comes to following proper ethical and methodological guidelines, polling organizations are not all created equal.

Unfortunately, during the election season, you are likely to find high quality survey work replaced by expediency and bias. This happens regardless of the partisan or ideological leanings of the pollsters.

One of the main sources of error in interpreting poll results is poor question wording. Survey questions are supposed to be accurate measures of one's attitudes, opinions, beliefs, and behaviors, just as a bathroom scale is supposed to be an accurate measure of one's weight. At a minimum, questions should be well written and easily understood by respondents, and they should not be biased toward a particular viewpoint.

As a professor in Delaware, I've been following the polls targeting the now infamous U.S. Senate election and other national issues. Delaware does not have regular polling, but when it does happen, it's important to do it right.

Take for example the most recent Fox News poll conducted by Pulse Opinion Research on September 18th in the state of Delaware. The interactive voice response (IVR) survey -- where a computerized voice asks the questions -- contacted 1,000 likely voters in the state (margin of error ± 3%).

See the Poll's Results here (PDF)
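As an aside, the ±3% figure itself is straightforward to reproduce. A minimal sketch, assuming simple random sampling and the conservative worst case of a 50/50 split (the poll's report does not state its exact method, so both are assumptions):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error at ~95% confidence.

    p=0.5 maximizes p*(1-p), giving the conservative figure
    pollsters conventionally report for the whole sample.
    """
    return z * math.sqrt(p * (1 - p) / n)

# For the Pulse sample of 1,000 likely voters:
print(f"{margin_of_error(1000):.1%}")  # about 3.1%, reported as +/-3%
```

Note that this figure covers only sampling error; none of the question-wording problems discussed below are captured by it.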

There are [at least] three questionable "questions" on the brief poll that caught my attention. The first asked respondents about their preferences regarding government.

"All in all, would you rather have bigger government that provides more services or smaller government that provides fewer services?"

Approximately 35% said "bigger government" providing more services, 53% said "smaller government" with fewer services, and 12% were unsure.

First, it's unclear which services respondents would prefer. Perhaps Delaware residents would prefer only military protection, Social Security, and Medicare; likely the three fastest-growing expenditures in the federal budget. Second, it's unclear what is meant by "bigger" or "smaller" government. The question could be referring to spending, the number of agencies, red tape and bureaucracy, or the reach of government into the private sphere. In general, the question is ambiguous and should be rewritten to be more specific.

I'm not a fan of the spirit of the question, but if Pulse wants to stay consistent with this line of wording, I'd suggest, "Would you prefer a [federal] government that provides fewer services or one that provides more services?"

Another question on the survey is "double-barreled." It asks the following:

"Do you agree or disagree with the following statement: the federal government has gotten totally out of control and threatens our basic liberties unless we clear house and commit to drastic change."

What? Let's count how many assertions are in this one statement: 1) the government has gotten out of control, 2) the government threatens our basic liberties, 3) we must clean house, and 4) we must commit to drastic change. These are really four separate questions that could be asked of respondents. Putting them to respondents as a single assertion to agree or disagree with is problematic. I won't even address the loaded language -- "out of control" and "threatens" -- in the question.

And, it's not clear what aspects of government are out of control; is it the budget, the over-reach of government, not listening to the people, the wars, or what? I'd suggest simply dropping this item and returning to the drawing board.

A third question presents inaccurate and illogical information, and is poorly worded. It asks respondents:

"Thinking about the health care law that was passed earlier this year, would you favor repealing the new law to keep it from going into effect, or would you oppose repealing the new law?"

First, many parts of the health care legislation have already gone into effect, so it seems impossible to "keep [the law] from going into effect." Thus, I take the item to suggest that there are specific parts of the law that should be repealed before they go into effect and that everything that's already in effect is okay; however, if I (or respondents) have to make assumptions about the question's intent, then it's a bad question.

Second, the first option actually contains two parts: one about repealing the new law and another about keeping it from going into effect. This is the double-barreled problem all over again. Finally, the wording of the question is imbalanced. The statement pairs "favor" with "repealing the new law to keep it from going into effect" and trivially tosses in the "oppose" option at the end. Why not simply ask, "Do you favor or oppose repealing the new health care law?" Even this wording is too general, but it's much better than the status quo.

Questions about the 2010 health care legislation are notoriously bad because they don't speak to any specifics. What aspect of the legislation should be repealed? Is it the part about pre-existing conditions, the individual mandate, or the part about adult children being able to stay on their parents' health insurance? The public should be skeptical of health care generalities because people may disagree with the narrative of politicized health care legislation, but not with its specifics.

Finally, the poll asks respondents about the state of the economy in Delaware.

"Generally speaking how would you rate the condition of Delaware's economy... excellent, good, not so good, or poor?"

This seems like a clear-cut question, but the topline response categories presented by Pulse raise some additional concerns. The topline report shows 1% saying "excellent," 31% saying "good," 50% saying "not so good," 16% saying "fair," and 2% saying "not sure." Where the heck did "fair" and "not sure" come from? They are not part of the question. And why offer two closely related categories like "not so good" and "poor"? What's the difference? If I'm asking, respondents are likely asking too.
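This kind of mismatch between question wording and topline categories is something any reader can check mechanically. A minimal sketch, using the offered options and topline figures as given above (whether Pulse treated "fair" and "not sure" as volunteered responses is not stated in the report, so that remains an open question):

```python
def check_topline(offered, reported):
    """Flag reported response categories that were not offered in the
    question's wording, and total the reported percentages."""
    unexpected = [cat for cat in reported if cat not in offered]
    total = sum(reported.values())
    return unexpected, total

# Options read in the question vs. categories in the topline report
offered = ["excellent", "good", "not so good", "poor"]
reported = {"excellent": 1, "good": 31, "not so good": 50,
            "fair": 16, "not sure": 2}

unexpected, total = check_topline(offered, reported)
print(unexpected)  # ['fair', 'not sure'] -- categories not in the question
print(total)       # 100
```

The percentages do sum to 100, so the problem isn't arithmetic; it's that two reported categories were never read to respondents, while "poor," which was read, vanished from the topline.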

Pulse, other polling organizations, and ideologues should not interpret these critiques as indictments of individual pollsters; rather, this type of criticism speaks to the very future of public opinion polling as an industry. Public opinion researchers and survey methodologists are finding it hard enough to do high-quality work in a changing social world of cell phones and high geographic mobility.

The errors mentioned above are not difficult to resolve; but if the same wording appears again from the same polling organizations, it would speak volumes about the agenda of the pollster or their client. The bottom line is that survey and polling researchers need to stay focused on good measurement, lest the polling industry become as politicized as many in the public currently consider the media to be.

 

Follow David C. Wilson on Twitter: www.twitter.com/dcwilsonphd