The initial study found that the least informative media were two partisan cable news channels, Fox and MSNBC, which came out at the bottom of twelve sources tested. NPR and Jon Stewart's Daily Show came out on top as the most informative, making the schadenfreude all the more delicious for Fox-haters, and the twisting of the liberal knife-in-the-back all the more painful for Fox fans. But how did it come to that?
Respondents were asked to identify which, if any, news sources they had used in the past week. The same respondents were questioned about current political and economic events. Some questions were deliberately easy; others were hard. The study then looked at the relation between which news sources people used, and how well they could answer the questions. In modified geek-speak, the idea was to isolate the effects of individual news sources on the ability to answer questions about current events, controlling for all of the other news sources, as well as things that tend to predict political knowledge, such as partisanship, age and education.
Overall, Fox viewers were not better or worse than the average respondent at answering the questions. That said, and all salient variables being geekily controlled for, there was not merely a zero effect but a negative effect of Fox News on viewers' ability to answer the questions; meaning that Fox viewers would have done better had they been using almost any other news source, or no news source at all. Results for the similarly partisan MSNBC were... well, similar.
The big surprise was that news reports focused almost exclusively on Fox's last-place showing, and that the reports went viral. Then the unexpected bonus was the number of Fox-defenders who sent emails and snail mail, left voice messages, and blogged intensely, making every sort of criticism, especially argumentum ad hominem. In these accounts, the investigators naturally suffered from a variety of mental, moral, and physical deformities. The most predictable charge was that the researchers, being college professors, were by definition mindless, meming, bed-wetting liberals, who drew their conclusions first and arranged the data accordingly, or gleefully over-interpreted the results to gratify their prejudices.
The most substantive critique was that some questions were ambiguous, and therefore skewed the results. Critics pointed to one question in particular which asked whether the Egyptian people had been successful in "bringing down their regime." Alert readers suggested that while Egyptians were successful in forcing President Mubarak from power, one could not definitively say that they had overthrown the regime, since the same military which secured Mubarak for many decades continued to run the country, and the same protesters who were fed up with Mubarak's rule continued to agitate for the military to relinquish its control.
The perspicacity of this observation was matched only by the peculiar number of people who made the argument, frequently in identical language. Still, the answer is that an ambiguously worded question should have an equivalent effect on many different kinds of media consumers. Bias arising from poor question construction should not be systematic, but random. Fox-defenders were unintentionally positing that the effects of ambiguous language, or an arbitrary correct-answer category, would affect Fox viewers disproportionately to all other media consumers.
One can suppose that this is within the realm of possibility if Fox viewers were systematically being presented with, and paying attention to, more news about the Egyptian Spring than other citizens, and were more thoughtful about it, and thus more likely than others to all draw similar conclusions. As one enthusiast put it, "Fox viewers got it right, and you got it wrong. My advice is to leave international affairs to someone else, or start watching Fox."
But were this alternative hypothesis true, it should have applied uniquely to scores on the poorly constructed question. In fact, a "wrong" answer to the Egyptian question correlated strongly with "wrong" answers on all the other questions.
Critics also made points about the size and scope of the sampled population; some suggested that because the study was done in New Jersey, the results could not be imputed to other states, much less to the entire nation. New Jersey is, after all, demographically different from many other states. It leans to the Democratic Party. More than one in four residents speaks a language other than English at home. And it has an elevated percentage of people with graduate education.
Other critics suggested that too few people were included in the study and that too few questions were employed. The study used five questions, primarily because space on the omnibus questionnaire was limited. The same poll included voter assessments of the president and of New Jersey's governor, as well as a series supported by the New Jersey Farm Bureau, an annual sponsor of the poll.
The answer to all these criticisms was to run the experiment again, this time using a national sample, increasing the number of questions, and doubling the number of interviews. The new, national study thus included eight questions: four domestic and four international. The N increased from 612 to 1185, though it must be said that this was never, as some critics supposed, a matter of having enough cases in certain cells to compare them with conventional tests and pronounce their differences significant. It was never a matter of examining crosstabs.
What the researchers searched for were the marginal effects of exposure to one news medium compared to any other. Their figures represented expected, not observed, values and all were relative to a hypothetical construct of someone who had no recent news exposure. Of course, most people get news from multiple sources, but the effect of each source, or of no source, can be calculated using multinomial logistic regression. All results controlled for partisanship, age, education, and gender, so that conclusions were presented ceteris paribus.
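The approach described above can be illustrated with a minimal sketch. Everything here is synthetic and hypothetical: the data, the variable names, the bucketing of scores, and the coefficients are invented for illustration, not drawn from the study. The sketch fits a multinomial logistic regression on simulated respondents and then compares the expected (not observed) probabilities for two hypothetical people who are identical on the controls but differ in news exposure, with "no news exposure" as the baseline construct.

```python
# Hypothetical sketch of the method: multinomial logistic regression with
# marginal comparisons against a "no news exposure" baseline. All data and
# effect sizes below are synthetic illustrations, not the study's data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Exposure indicators plus the controls the article names.
fox = rng.integers(0, 2, n)
npr = rng.integers(0, 2, n)
education = rng.integers(0, 5, n)        # 0 = none .. 4 = graduate degree
age = rng.integers(18, 90, n)
partisanship = rng.integers(-3, 4, n)    # -3 strong Dem .. +3 strong Rep

# Simulated knowledge score, bucketed into low/medium/high (0/1/2) so the
# outcome is genuinely multinomial; the +/- coefficients are arbitrary.
latent = 0.5 * npr - 0.3 * fox + 0.4 * education + rng.normal(0, 1, n)
y = np.digitize(latent, np.quantile(latent, [1 / 3, 2 / 3]))

X = np.column_stack([fox, npr, education, age, partisanship])
model = LogisticRegression(max_iter=1000).fit(X, y)

# Expected values for two hypothetical respondents, identical on the
# controls, differing only in exposure -- ceteris paribus, as in the study.
controls = [3, 50, 0]                    # education, age, partisanship
fox_only = np.array([[1, 0, *controls]])
no_news = np.array([[0, 0, *controls]])

p_fox = model.predict_proba(fox_only)[0]
p_none = model.predict_proba(no_news)[0]
print("P(high score | Fox only):", round(p_fox[2], 3))
print("P(high score | no news):", round(p_none[2], 3))
```

The point of the design is visible in the last four lines: the comparison is between model-predicted probabilities for counterfactual respondents, not between raw crosstab cells, which is why cell sizes were beside the point.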
The re-study produced the same result as the original, but attracted few headlines. The news aggregators had already had their fun.
Fox came out on the bottom, even below "no news exposure." NPR came out on top, along with The Daily Show. Responses to the question about Egypt, now rephrased to specifically name Mubarak, were no different. We concluded again that NPR is one of the "most informative news outlets," while "exposure to partisan sources, such as Fox and MSNBC, has a negative impact." But perhaps that latter phrase was misleading.
We never said, nor meant to say, that Fox viewers are dumb -- or MSNBC viewers for that matter. They're no better or worse than the average respondents. Clearly, anyone who is dumb and watching TV was dumb when he or she sat down in front of the tube. Some news sources just don't help matters any.
Dan Cassino and Peter J. Woolley are professors of political science at Fairleigh Dickinson University in Madison, New Jersey. Cassino is Director of Experimental Research for the University's research group, PublicMind; Woolley is its founding Executive Director.