In his post on "Hired Gun" Polling, Mark Blumenthal suggests the need for pollsters to "distinguish between questions that measure pre-existing opinions and those that measure reactions."
He makes an important point. Much of what pollsters offer to the world as "public opinion" is in reality hypothetical, based on giving respondents information that many in the general public may not have and then immediately asking respondents for their reaction to that information.
Such results can be illuminating, but pollsters recognize that feeding respondents information means the sample no longer represents the American public and what Mark calls its "pre-existing" opinion. Unfortunately, many pollsters fail to acknowledge the hypothetical nature of such results, and instead treat them as though they represent the current state of the public's views.
The problem with this kind of approach is illustrated in the case that Mark discussed in his post, dealing with the "card check" bill, the proposed Employee Free Choice Act (EFCA) concerning the authorization of unions in the workplace.
The vast majority of Americans, one can reasonably assume, have little to no knowledge of the provisions of the bill. Thus, to measure "public opinion" on the issue, pollsters feel they need to tell respondents what the bill is all about. A Republican pollster explained the bill one way, a Democratic pollster another way, and - to no one's surprise - they ended up with a "public opinion" that reflected their respective party's position on the issue.
While one may argue the relative merits of the questions used by the two pollsters, the main point is that any effort to inform respondents about a major policy proposal is intrinsically selective, and therefore biased. Pollsters have to decide which elements of the proposal are important enough to mention, and they can come to quite different conclusions. This problem affects even nonpartisan public policy pollsters, who - we can reasonably assume - have no partisan agenda, but who can nevertheless produce what appear to be partisan results.
Such problems have multiplied with the recent public policy polling on the bailout proposals for Wall Street and for the auto industry, and on the stimulus plan being considered by Congress. Most pollsters assume the public has little specific knowledge of such proposals, and thus they provide respondents with specific information in order to measure the public's (hypothetical) reaction.
When CNN described the proposal to bail out the auto industry by characterizing it as "loans in order to prevent them from going into bankruptcy," in exchange for which the companies would produce plans "that show how they would become viable businesses in the long run," it found a 26-point margin in favor (63 percent to 37 percent). But when an ABC/Washington Post poll only a few days earlier used the word "bailout" in its question and made no mention of plans for the companies to become viable, it found a 13-point majority against the proposal (55 percent to 42 percent).
Again, one can debate the relative merits of the two questions, but the tendency of pollsters is to say that each set of results provides different insights into the dynamics of the public's views on this matter. In short, each provides a picture of potential public reaction to the proposal, if the proposal is framed to the general public in the way each polling organization presented the issue to its respondents.
That distinction is generally lost in the news reports. Each polling organization instead announces its results as though they reflect the current views of the public, over which the polling organization had no influence. But the reality is that the polling organization inevitably shapes its results by the very way it presents the issue to respondents.
As Mark argues, such "reaction" polling has a useful role to play in the public discourse on public opinion. However, it's also important that pollsters make clear that their results do not reflect "pre-existing" opinion (opinion before the polls were conducted - though one might instead use the word "extant" opinion), but rather hypothetical opinion under restricted conditions.
It used to be that newspapers made a formal distinction between "hard news" articles and "analysis" articles - clearly labeling the latter as such. That procedure doesn't seem to be followed these days, but it may be a useful analogous model for pollsters. Perhaps, in a similar way, pollsters can devise a method to formally separate their reports of potential "reaction" public opinion from existing public opinion.
I can envision, for example, one article in a newspaper that describes how many (or how few) people are actually aware of an issue and how many express ambivalence about the matter, while another article could explicitly describe how the public might react if the issue were universally framed in one way or another. Pollsters have made such distinctions sporadically, which is why we know that the public is more likely to support "rescuing" the auto industry than "bailing it out."
Still, a formal and widely accepted method of distinguishing "reaction" questions from those that measure existing opinion needs to be found, if pollsters are to avoid the confusion that arises when highly reputable polls produce wildly contradictory results.