Wrong Again: Why Experts' Predictions Fail, Especially About the Future

01/05/2012 12:03 am ET | Updated Mar 05, 2012
  • Michael Shermer, Founding publisher, Skeptic magazine; editor,

With the first week of 2012 upon us, it is time to reflect on that favorite pastime of soothsayers and scientists alike: predicting the future. At Skeptic magazine we routinely publish articles about the failed predictions of soothsayers, astrologers, tarot-card readers, palm readers, and psychics of all stripes. But frankly, scientists are not much better, especially in the social sciences, where we depend on the predictions of psychologists, sociologists, and, most notably, economists.

In his Newsweek column for September 19, 1966, the economist Paul Samuelson lamented that when it comes to predicting Gross National Product, "commentators quote economic studies alleging that market downturns predicted four out of the last five recessions. That is an understatement. Wall Street indexes predicted nine out of the last five recessions!" Economists' prediction track record has only gotten worse since, along with the prediction-to-outcome ratio, which I have seen at 10:3, 7:2, and even 9:0 (yes, economists predicted 9 of the last 0 recessions). With such bestsellers as The Great Depression of 1990 and Dow 36,000, economic forecasters have proven themselves time and again to be indistinguishable from astrological soothsayers. But they are not alone.

Historians are among the most knowledgeable scholars of the past so you might think that they would be especially good at finding patterns and predicting trends. In fact, one of the smartest and most deeply read historians of the 20th century, Arnold Toynbee, was spectacularly wrong in his blockbuster A Study of History, in which he thought he had identified a challenge-and-response cyclical pattern that all civilizations follow: birth, growth, expansion, empire, and disintegration. Starting with Greece and Rome, Toynbee dug through the historical record to find confirmatory evidence for his theory (culminating in his call for America to rise to the challenge of its alleged mid-century moral decline). You would think he would have taken heed from his inspiration, the German historian Oswald Spengler, who erroneously predicted the "decline of the west" in the 1920s. But that's not how the mind works.

Why did Toynbee believe his own theory in the teeth of contradictory evidence presented by other historians? Because of the confirmation bias, which our brains employ to reinforce what we already believe while ignoring disconfirming data. Everyone does it, which is why the scientific method includes hypothesis testing, blind and double-blind controls, replication of results, statistical tests for significant (or insignificant) differences between experimental groups, and other techniques specifically designed to attenuate the nefarious effects of belief bias. Toynbee could have saved himself decades of futile effort and fruitless predictions had he simply employed a dozen professional historians to test his hypothesis: have each of them independently go through the same data set he used, classify each civilization according to whatever patterns they could find, and then run an inter-rater reliability correlation on the historians' ratings to see whether there was any statistically significant consistency.
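The test proposed above can be sketched in a few lines of code. Cohen's kappa is one standard chance-corrected measure of agreement between two raters; the historians' names, the civilizations, and the stage labels below are all hypothetical illustrations, not data from Toynbee's actual study.

```python
from itertools import combinations

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters' category labels."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the raters labeled items independently,
    # each at their own base rates.
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    if p_e == 1.0:
        return 1.0  # both raters used a single category throughout
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: three historians each classify six civilizations
# into Toynbee-style stages.
ratings = {
    "historian_1": ["growth", "empire", "empire",
                    "disintegration", "growth", "empire"],
    "historian_2": ["growth", "empire", "growth",
                    "disintegration", "growth", "empire"],
    "historian_3": ["empire", "empire", "growth",
                    "disintegration", "growth", "growth"],
}

for (name_a, a), (name_b, b) in combinations(ratings.items(), 2):
    print(f"{name_a} vs {name_b}: kappa = {cohens_kappa(a, b):.2f}")
```

Kappa runs from about 0 (agreement no better than chance) to 1 (perfect agreement); if the pairwise values hover near zero, the raters are not converging on any shared pattern, which is exactly the outcome that would have undermined Toynbee's scheme.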

According to the investigative journalist Dan Gardner in his 2010 book Future Babble (McClelland and Stewart) and the University of Pennsylvania psychologist Philip E. Tetlock in his 2005 scholarly masterpiece Expert Political Judgment (Princeton University Press), such cognitive biases are pervasive among liberals and conservatives, optimists and pessimists, the well educated and the less educated, the well informed and the less informed. After testing 284 experts in political science, economics, history, and journalism on a staggering 27,450 predictions about the future, Tetlock concluded that they did little better than "a dart-throwing chimpanzee." There was one significant difference, however, and that was cognitive style: "fox" versus "hedgehog."

Foxes know many things while hedgehogs know one big thing. Being deeply knowledgeable on one subject narrows one's focus and increases confidence, but it also blurs dissenting views until they are no longer visible, thereby transforming data collection into bias confirmation and morphing self-deception into self-assurance. The world is a messy, complex, and contingent place with countless intervening variables and confounding factors, which foxes are comfortable with but hedgehogs are not. Low scorers in Tetlock's study were "thinkers who 'know one big thing,' aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who 'do not get it,' and express considerable confidence that they are already pretty proficient forecasters." By contrast, says Tetlock, high scorers were "thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible 'ad hocery' that require sticking together diverse sources of information, and are rather diffident about their own forecasting prowess."

So as 2012 unfolds, most notably with predictions about political elections, beware of the experts on CBS, NBC, ABC, Fox, CNN, and even here at Huffington Post. For the most part these experts are no better than dart-throwing chimps. By contrast, follow the electronic markets that employ the wisdom of the crowd, such as InTrade, whose track record in predicting election outcomes far surpasses that of any of the aforementioned sources. Remember this prediction in the months to come: InTrade has Mitt Romney taking the Republican nomination at 79.7% but losing to Barack Obama in the general election at 51.5%.

As with all predictions, time will tell...