"Science is a way of trying not to fool yourself." -- Richard Feynman
The dialectic between belief and truth is a tender bind. Using the snake-in-the-grass example, Michael Shermer captured the effect of a low-cost inferential error -- believing a pattern when it is not true -- versus a high-cost inferential error -- not believing a pattern when it is true. It is easy to understand why we might believe scanty evidence of snakes in the grass. And Shermer's just-so story (with evidence from mechanistic underpinnings) is a great hypothesis about why a correlated pattern makes us jump to the false conclusion of the worst-case scenario.
But sometimes the ratio of cost to benefit in inferential errors is reversed. Sometimes the cost of believing something is true when it is false (being a "true believer") is gravely expensive compared to the cost of believing something is false when it is true (being a skeptic). What's the harm if you believe in God when, in fact, there is no god? And what's the harm in believing there is no god if, in fact, God exists? That's a metaphysical tender bind.
I like to think about more empirically and theoretically tractable tender binds of belief and truth, like those I encounter in everyday science. What's the harm of "true belief" in a scientific study whose method was incapable of answering the questions it set out to answer? Skeptics may challenge true studies, to their peril, but they sometimes hit big when they doubt a false one.
In science we value skepticism as a mechanism of "baloney detection." Skepticism helps keep us scientists from fooling ourselves, and we often, but not always, value the skeptics among us, because they are often the visionaries who think up alternative hypotheses, design better controls, and repeat studies.
For the last few months, I have been waving "... a candle against the dark" of a single influential, canonical experimental study of sexual selection. I wrote about investigators unable to accept their own observations when those observations went against consensus views. I blogged about how "true belief" promoted biased inferences from methods incapable of answering the authors' questions, about readers' failures to catch the evidence in the original paper that the data were fatally flawed, about the insights from a modern repetition, and about the value of control experiments. I wrote about how alternative hypotheses, repetition, and controls -- the memes of science -- liberate the truth from the lie in belief.
How much did the false "true belief" cost? The costs, I suspect, were incalculably high. What we will never know are the studies that might have been, had the dominating conclusions not held sway for so long. What hypotheses might we have pursued sooner? How much more would we know now?
I reckon that what happened to skepticism is explained by Robert Trivers' hypothesis for the evolution of self-deception, which assumes a co-evolutionary dynamic between liars and those they deceive. If liars lie to gain advantages at the expense of others, the deceived are under selection to detect the lies, favoring the evolution of lie detectors who pick up the subtle signs that liars -- who know they are lying -- give off. An increase in lie-detecting individuals in turn produces counter-selection favoring liars who better hide their lies, and so on, until liars who do not know they are lying evolve. Liars who do not know they are lying give off no signs of lying, because they deceive themselves too.
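The escalating dynamic Trivers describes can be caricatured in a few lines of code. To be clear, this is only an illustrative toy model of my own devising, not anything from Trivers: the "tell" variable, the update rules, and every parameter value are invented for the sketch.

```python
# Toy sketch (my invention, not from Trivers) of the liar/lie-detector
# arms race. Liars carry a "tell" between 0 and 1; a lie is caught with
# probability tell * detector frequency. Getting caught is costly, so
# selection shrinks the tell until self-deceived liars -- tell near
# zero -- dominate and lie detection stops paying off.

def arms_race(generations=200, tell=0.9, detectors=0.1,
              penalty=2.0, rate=0.05):
    """All parameter values are arbitrary illustrative choices."""
    for _ in range(generations):
        caught = tell * detectors            # chance a lie is detected
        # Catching liars rewards detectors, so their frequency rises...
        detectors = min(1.0, detectors + rate * caught)
        # ...which selects for liars with ever-fainter tells.
        tell = max(0.0, tell - rate * penalty * detectors)
    return tell, detectors

final_tell, final_detectors = arms_race()
print(final_tell)  # the tell is driven to zero: self-deceived liars win
```

Run with these made-up numbers, the feedback loop plays out just as the paragraph above describes: detectors spread while there are tells to catch, and that very spread selects the tells away, leaving liars who show no signs at all.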
Did the original investigator lie? Was he self-deceived? It's impossible to know. The author is long dead, so I feel most comfortable concluding that he was self-deceived and truly believed his flawed data could answer his questions. But what did he get from his self-deceit? Had formal peer reviewers detected the flaw, no harm would have been done. They would simply have assumed an honest mistake by a sincere, hard-working, perhaps naive neophyte, and the paper probably would not have been published. Passing peer review, the author gained a benefit -- "another paper published" -- and it cost the truth dearly. But what did the reviewers and the author's contemporaries get from their "true belief"? The result was intuitively satisfying to many readers, as it fit the status quo expectation, the dominating worldview of sex differences. It wasn't as though detecting the evidence of flaws in the data was difficult: anyone with a knowledge of Mendel's rules could have done it. Yet even the great geneticist R. A. Fisher, who was capable of detecting the flaws in the data, did not, but rather seemed to like the conclusions. I suspect Fisher didn't read the paper carefully. Or maybe he depended on the peer reviewers. Or maybe he was enamored with the paper's justification for why the sexes are "as they are". Or maybe it was the more prosaic benefit of more time for forging the NeoSynthesis.
What did all those subsequent readers get? I do not know. However, it must have been something pretty valuable, because for a long time truth lost. The convenient lie of a status quo worldview won the day.
Beware horse heads in rocks, cupids in clouds, and Jesus in the tortilla. Beware that secretly faithless friend, "the snake in the grass" of "seeing what you believe".