As I slid into a chair in a crowded seminar room next to my collaborator, he excitedly whispered, "No statistical sex difference in Bateman slopes, but the speaker says that his data on females are biased but not the male data. What's going on?" In other words, the speaker had admitted that half of his data failed to fit his conclusions, but he stuck to his guns anyway, much to my colleague's confusion.
Of course, I hadn't heard the talk, so I couldn't comment. Yet it sounded suspiciously like the speaker was a true believer in the "Bateman principle," the idea that females gain nothing from mating with more than one male, while males get a big payout for so-called "promiscuity." What befuddled the speaker and excited my collaborator were "Bateman gradients," plots of the slope of the relationship between number of mates and number of offspring. Because of Bateman's principle, most expect the slope to be steep and positive for males but shallow or flat for females, meaning that it pays for males to mate with many females, but that the payout is unimpressive for females who mate with many males (aka the evolutionary justification for the double standard).
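For readers who want the concrete version: a Bateman gradient is just the slope of a least-squares regression of offspring number on mate number, fitted separately for each sex. Here is a minimal sketch in Python using made-up, purely illustrative numbers (not data from any real study), in which the male slope is steep and the female slope is nearly flat:

```python
import numpy as np

# Hypothetical illustration only: invented mate counts and offspring counts.
# In these made-up numbers, male offspring climb with each new mate, while
# female offspring plateau after the first mate.
male_mates       = np.array([0, 1, 1, 2, 3, 3, 4, 5])
male_offspring   = np.array([0, 8, 10, 19, 27, 31, 38, 47])
female_mates     = np.array([0, 1, 1, 2, 3, 3, 4, 5])
female_offspring = np.array([0, 9, 11, 12, 10, 13, 11, 12])

def bateman_gradient(mates, offspring):
    """Slope of the offspring-on-mates least-squares line (degree-1 fit)."""
    slope, _intercept = np.polyfit(mates, offspring, 1)
    return slope

male_slope = bateman_gradient(male_mates, male_offspring)
female_slope = bateman_gradient(female_mates, female_offspring)
print(f"male gradient:   {male_slope:.2f}")    # steep positive slope
print(f"female gradient: {female_slope:.2f}")  # shallow slope
```

The puzzle in the seminar, then, was a speaker whose two slopes did not differ statistically, yet who trusted only one of them.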
The principle comes from the most cited empirical study in sexual selection, published in 1948 in the journal Heredity. Almost everyone is familiar with its conclusions: "Horny, aggressive, competitive males" and "choosy, relatively indifferent-to-mating, calm, dependent females" are what they are because of selection for genes in males to mate with as many females as possible and selection for genes in females to be coy, discriminating, even indifferent to mating. The cave-man rendition of the conclusions captures the two main mechanisms (female choice and male-male competition) of sexual selection (a type of Darwinian selection that results in the evolution of fancy traits in males). Assuming that females are limited in the number of offspring they can produce, while males are limited only by their access to females, the prediction goes like this: males with many mates win the Darwinian contests by gaining more offspring each time they mate with a new female, while females do not increase their number of offspring with each new mate.
Others in the audience didn't seem to notice any peculiarity in the speaker's conclusion. The speaker's data didn't fit the generally accepted principle, so his odd defensiveness about his own results likely came from a conviction that he had somehow messed up his observations of an Australian garden bird, even though he had no idea how.
My collaborator was far less sanguine, "What's going on here? If the data are wrong for females, why aren't they wrong for males? The speaker's methods for each sex sounded the same to me."
Indeed, what was going on?
How did it happen that a scientist equivocated in the face of his own data?
I think there are two possibilities. Either his data tell the truth and he believes in Bateman's principle anyway, or he unintentionally screwed something up in his methods and his results are unreliable, as he claimed.
Scientists are experts at the scientific method. Most scientists do their jobs and control their studies against biases -- sometimes even their own (when they know what their own biases are) -- and interpret their studies in "disinterested" ways. Science is a self-correcting process, and the scientific method works so well because scientists are, well, "geeks." Many scientists are remarkably original thinkers with highly developed practical skills, like knowing calculus or possessing the rare ability to watch and record animal behavior (and for very long periods of time!), and scientists are enamored with experimental rigor, controls and the precision of measurements. In today's world, think of Nate Silver. To my eye, most of us who call ourselves scientists, including the befuddled speaker who exercised my collaborator, actually fit this bill.
However, scientists are human. So, I'm inclined to think the speaker is a true believer in Bateman's principle.
Yes, true believers include scientists, probably much more frequently than any of us imagine, for self-deception dogs scientists just as it dogs the rest of life. In fact, the plain-speaking Nobel laureate physicist Richard Feynman defined science this way: "Science is a way of trying not to fool yourself. The first principle is that you must not fool yourself, and you are the easiest person to fool."
Feynman didn't tell us why it is so easy to fool ourselves, but Crafoord Prize winner Robert L. Trivers did, with his revolutionary idea about the evolutionary origins of self-deception. His idea explains why self-deception can keep us from seeing the noses on our faces. Trivers argued that because there is selection to detect lies, there is counter-selection on liars to hide the telltale physiological and behavioral signs of lying, pressures that favor the evolution of self-deception. The best liars are those who do not know they are lying.
In the dynamics of the sociology of science, there must be some payout for self-deceived true believers.
Our collective failure to see the noses on our faces -- the biased interpretations of biased results in the paper that gave us Bateman's principle -- suggests that the payouts are widespread. Something is going on, but it's not exactly clear why this iconic study in sexual selection was left unreplicated for 60 years, only to be questioned seriously and finally replicated in the first decade of the 21st century.
Further reading: Trivers, R. L. (2011). The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life. New York, NY: Basic Books.