A week or so ago, my colleague Marc Ambinder (anchor of the new Atlantic Politics Channel), did a series of blog posts on some privately commissioned polling on the subject of the so-called "card check" bill, or more formally, the proposed Employee Free Choice Act (EFCA). It is a great example of two big lessons we ought to remember when considering this sort of "hired gun" polling data.
Ambinder started with a post that contrasts questions from two pollsters working for opposite sides of the Card Check debate. First up was a question asked by Peter D. Hart Research Associates on behalf of the AFL-CIO:
[Do you favor or oppose legislation that] Allows employees to have a union once a majority of employees in a workplace sign authorization cards indicating they want to form a union. 75% favor.
Next, Ambinder presented a different question asked by pollster John McLaughlin on behalf of the anti-EFCA organization Coalition for a Democratic Workplace (CDW):
There is a bill in Congress called the Employee Free Choice Act which would effectively replace a federally supervised secret ballot election with a process that requires a majority of workers to simply sign a card to authorize organizing a union and the workers' signatures would be made public to their employer, the union organizers and their co-workers. Do you support or oppose Congress passing this legislation? 15% favor, 74% oppose.
Ambinder followed up with a three-part exchange between anti-EFCA consultant Mike Murphy and pro-EFCA pollster Guy Molyneux. The short version: Our poll was "more accurate," no your poll was "outrageously biased," no wait, let's let Mikey try it** -- let's test Ambinder's language.
Who was right? Which question is best (or more "accurate" or less "biased")? My answer: Neither. Or both.
The big challenge with this sort of issue, as Ambinder puts it in his first post, is that most Americans "don't know what EFCA is, or what 'card check' would mean." So any question that begins by describing the provisions in the bill does not test pre-existing opinions for the vast majority of Americans. Instead, such questions test the way Americans react to new information. In that sense, they provide a gauge of "public opinion" only in a very hypothetical way. They tell us what public opinion might be if all that Americans knew about the bill was "[fill in the blank]."
These questions can be useful because attitudes about public issues can change as they get a high-profile debate in Congress. Assuming that EFCA comes to a vote in the coming months, more Americans will learn about it and form new impressions. It can be helpful to try to preview how they might react and how different "framings" of the debate can shape reactions. Pollsters like McLaughlin and Molyneux get hired by clients who want to do just that: frame the debate and help sway public opinion to their point of view (and full disclosure to those just tuning in: I earned my living for many years as just such a pollster).
The warning label that ought to go on such results is that they can be, as Molyneux argues, "very sensitive to question wording" without a single "'correct' way to ask the question." The inevitable arguments about which question is most "right" usually mirror the larger substantive debate. So, if you are a partisan on EFCA and you have little trouble choosing the "correct" version of the questions above, you should also know this: Your ability to see "the truth" on this issue does not lead to the conclusion that "public opinion" is on your side. It might be someday, but only if your "framing" wins the day and shapes the way most Americans (and not just policy-makers and very well informed Americans) learn about the issue.
In this case, the side-by-side comparison tells us -- indirectly -- that most Americans are unfamiliar with EFCA and that very few have real, pre-existing opinions about it. We also learn that both sides have arguments that are potentially very persuasive. So these questions are useful, even if they test something hypothetical.
Two big lessons: First, we need to be especially cautious about interpreting interest-group-sponsored results when we have only one side's poll and cannot do a side-by-side examination of surveys sponsored by opposing interests.
Second and even more important, we need to better distinguish between questions that measure pre-existing opinions and those that measure reactions.
I have more to say about that second lesson, and hope to come back to the topic later this week.
**The explanatory link for those of you too young to get the reference.