01/30/2009 01:09 pm ET Updated Dec 06, 2017

"Manipulating" Public Opinion

My colleague, Mark Blumenthal, has recently posted his reaction to an earlier post of mine, in which I suggested that most major media pollsters deliberately manipulate public opinion, in order to make it appear as though most of the public has an opinion on an issue. My examples included polls about the stimulus package, and how the public viewed the Democrats' control of the three branches of government.


In his critical remarks, Mark suggested that I was being a bit narrow in my view of public opinion and unfair in implying a nefarious motive on the part of the pollsters. In a later blog on the same issue, he noted that he had received reactions from two different pollsters, who did not want to reveal their names, one who works on campaigns and the other who works for the media. They provided somewhat different takes on his discussion of public opinion about the stimulus package, takes which I think tend to support my criticisms of media polls.


More about that in a moment. First, let me say that I appreciate Mark's well-considered criticisms, recognizing that they probably also reflect the views of many other practicing media pollsters (though let's hope not all). And I appreciate the opportunity that Mark offers for me to blog on this site about these issues, because I think such conversations are at the heart of the scientific enterprise - as does he. For that I am grateful.


As to the flaws in my concept of public opinion, I think Mark may misunderstand my recent focus on the lack of "no opinion" measures. Like Mark, I think that is just one part of measuring public opinion, but still a crucial one. Mark seems to agree, writing that "Yes, it is important to understand that many Americans lack a specific opinion on the 'economic stimulus' legislation per se, something stressed by too few pollsters. Still, that finding is just one part of 'public opinion' on this issue." (emphasis added)


I couldn't agree more. My view is that on important policy matters, pollsters should measure at least three dimensions of public opinion: 1) direction of support (from support to opposition), including the magnitude; 2) intensity of views; and 3) the absence of a meaningful view on the matter, or non-opinion. In my recent book, The Opinion Makers, I elaborate more fully on this concept. (For the time being, I will ignore the oft-neglected measure of intensity.)


If measuring the direction of public opinion and measuring non-opinion are both important, why don't we find both measures in most polls? Look at the graph below - these are the poll results that Mark assembled from the various polls in his critique of my commentary. All of them measure direction of opinion, as we would expect, but only one attempts to measure non-opinion (NBC/WSJ). (Mark suggests that Rasmussen may have provided an explicit "no opinion" option, but Rasmussen's topline with the actual question shows it was a forced-choice format, with "no opinion" a volunteered option.)


(The graph below takes the difference between the percentage of people who support and the percentage who oppose the stimulus package, as described in the respective polls, which is then plotted as the "margin in favor" - since all polls showed more people in favor than opposed to the stimulus. The graph also shows the percentage of people without an opinion, as reported by each poll.)
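The "margin in favor" calculation just described can be sketched in a few lines of Python. The poll names and numbers below are hypothetical placeholders, not the actual results from the polls discussed here; they only illustrate the arithmetic (support minus opposition, alongside the reported non-opinion figure).

```python
# Sketch of the "margin in favor" computation described above.
# The figures are hypothetical placeholders, NOT the actual poll results.
polls = {
    "Poll A": {"support": 62, "oppose": 30, "no_opinion": 8},
    "Poll B": {"support": 55, "oppose": 34, "no_opinion": 11},
}

for name, p in polls.items():
    # Percentage-point difference between support and opposition
    margin = p["support"] - p["oppose"]
    print(f"{name}: margin in favor = {margin} pts, "
          f"no opinion = {p['no_opinion']}%")
```

Plotting the margin alone, as the graph does, makes the point visually: a large margin paired with a tiny non-opinion figure is exactly the pattern being questioned.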


Mark acknowledges that too few pollsters stress the percentage of people who lack an opinion (see the italicized part of his quotation above), and his (and my) concerns are amply illustrated in the graph. Only 1 percent have no opinion according to CNN, just 3 percent says ABC/WP, 4 percent says Ipsos, 7 percent says NBC/WSJ, and 11 percent to 12 percent say Gallup and Hotline. Those are hardly credible numbers, if we are referring to a specific stimulus package (rather than to the general idea of some kind of stimulus), as all of these results do.


Indeed, Mark says the consensus of the two pollsters who contacted him in the wake of his critique was that "both imply agreement on one thing: Most Americans know little about the 'economic stimulus plan,' except that the President and the Congress are talking about it." Mark also adds at the end of his commentary that "'tepid' support is about the right phrase to use" to describe public opinion on the issue.


If that is the case (and I tend to agree with it), how did these pollsters arrive at that conclusion? Certainly not by looking at Gallup, Hotline, Ipsos, NBC/WSJ, CNN or ABC/WP. All of those polls suggest that very few people are unsure about the specific stimulus plan being considered by Congress, and that very large majorities are in favor. None of these pollsters suggested "tepid" support and widespread ignorance.


Mark also writes that besides measuring non-opinion, "reactions to new information are also important, as are the underlying values driving responses to all of the questions reproduced above." Again, I don't disagree, though he implies that I do. It is not mutually exclusive to measure non-opinion (which most pollsters fail to do) and also to measure reactions to new information or underlying values.


The question is why do most pollsters fail to measure non-opinion?


(Separately, why do most pollsters fail to measure intensity? That's also an important dimension, but I'll talk about the failure to measure intensity at a later time.)


My response is that most pollsters generally don't measure non-opinion because they don't want to reveal that a sizeable number of Americans don't have an opinion on important policy matters. Mark thinks I'm being unfair, and that I'm attributing nefarious motives to such pollsters.


Here's the dilemma: Mark and I both agree that non-opinion is a crucial part of measuring the public's position on policy matters. We also know that all the major media pollsters do measure non-opinion from time to time. So, why don't they measure non-opinion on such major issues as the stimulus? What criteria do they use to determine when they will, and when they won't, ask questions with an explicit "no opinion" option?


The simple answer is that the news media wouldn't find it interesting to constantly report that large segments of the population don't have a meaningful opinion about the major policy issues facing the country. That's why pollsters feed respondents information and ask forced-choice questions, all in an effort to reinforce the myth of a highly informed, rational, and engaged public. It's a far more newsworthy myth than the reality of a public with many people uninformed and unengaged on issues, and thus lacking meaningful opinions.


I'm not arguing that all people fit that category. I'm only arguing that pollsters and the media should be willing to admit the existence, and measure the size, of such large segments of the public, instead of manipulating respondents to come up with answers so that it will appear as though virtually all Americans have a meaningful opinion.


In 1942, Elmo Roper wrote in an essay for Fortune magazine, titled "So the Blind Shall Not Lead," that even then, less than a decade since the advent of modern polling, "the emphasis in public opinion research has been largely misplaced. I believe its first duty is to explore the areas of public ignorance."[1]

Exploring areas of public ignorance may not necessarily be the pollsters' first duty, but it is certainly an important duty they usually fail to perform.

[1] Elmo Roper, "So the Blind Shall Not Lead," Fortune, 25, No. 2, p. 102, cited in George Bishop, The Illusion of Public Opinion: Fact and Artifact in American Public Opinion Polls (Lanham, Maryland: Rowman & Littlefield Publishers, Inc., 2005), p. 6.