Stephen J. Ceci and Wendy M. Williams
In Part 3 of this series, we respond to more comments about our April 13, 2015 article in Proceedings of the National Academy of Sciences. (link)
We studied the wrong question
The bane of every researcher is the critic who says a study is actually about some question it was never designed to answer -- then proceeds to demean the study for failing to answer the question it never asked. Critics chided us for not asking different questions than the one we did ask: Instead of studying hiring bias, they argued, we should have examined bias in tenure, remuneration, bullying, chilly climate, and persistence. (link and link)
We agree that bias independent of hiring is important. In 2014, we and our colleagues published a 47,000-word monograph describing hundreds of analyses examining claims of gender bias both before and after hiring--in pay, promotion, persistence, and job satisfaction--in eight STEM fields. (link)
In the current research, however, we studied entry-level hiring which, when coupled with this prior work, illuminates why women are underrepresented in certain fields of science but not others. We wrote: "Although the point of entry into the professoriate is just one step in female faculty's journey at which gender bias can occur, it is an extremely important one." (link) We believe hiring is a very reasonable career point to address in view of our work on other aspects of bias.
Despite the claim that hiring is biased against women, we found no evidence of this for fields in which women are most underrepresented -- engineering, computer science, physics, mathematics, and economics. After our findings were published, a chorus of critics asserted that everyone already knew this and that bias against women occurs at time points other than hiring. This assertion is a denial of the numerous allegations of hiring bias, some of which we cited. (Readers can search "Wenneras and Wold hiring bias against women" for many more examples.) For example, onebadbint said: "As to whether there is gender discrimination in hiring in academia and elsewhere, come on! This is well documented in Virginia Valian's book Why So Slow which reviews the many many studies which show how changing the gender on an application gets fewer offers, lower salaries, and so on." (link)
Some critics feel we should have studied the entire professional career of academic scientists: "W&C focus specifically on hiring for Assistant-level tenure-track positions. Their data don't tell us about hiring for other kinds of positions (lectureships, senior hires, etc.), nor do the data tell us about promotion decisions, publication biases, salary issues, micro-aggressions, chilly atmospheres, and, of course, explicit harassment." (link) "The study only discusses tenure-track hires, not work environment (which qualitative work still suggests might be unwelcoming for women in STEM), senior appointments, mentoring etc." (link)
We and our coauthors from economics have published our findings on gender differences in salary, tenure, promotion, job satisfaction, persistence, etc. Thus, it is unclear why critics assail us for not studying these issues again. Our current research on the hiring process is one piece of the larger puzzle, albeit an important one in view of claims of bias. No amount of retrofitting by critics can deny that hiring bias was alleged to be commonplace before we presented findings to the contrary, and no one in touch with current work in the field can deny that we have published extensive analyses of gender differences in multiple aspects of academic careers.
The results from our current five experiments (and multiple validation studies) converged on the conclusion that women are preferred in tenure-track STEM hiring--but not because they are stronger than men, as some have opined. Predictably, some people do not welcome our findings. But they are real and they match what happens in real-world hiring (as described in the large-scale audit data we cite in our article, which show that when women apply for real-world assistant professorships, they are usually more likely to be hired than men). Because these real-world data were not experimental, it was argued that women's hiring advantage was due to women being stronger candidates: "Perhaps the women who survive training in a field where they have few mentors and surmount barriers most men may have little knowledge of, might actually be better. At least we cannot assume they aren't." (link)
This common assertion is precisely why we did the current experiments--to test the claim that female superiority is the driver of the female hiring advantage. We made the male and female candidates equally strong, and women were still preferred by faculty. So it is unlikely that female superiority is the reason women are hired at a higher rate in the real world.
Self-selection bias in our data
A number of posters and bloggers have taken us to task for what they perceive as a self-selection bias. In the words of Dr. Zuleyka Zevallos, an applied sociologist: "Williams and Ceci say they have addressed self-selection bias of their sample by conducting two control experiments...In effect the study design does not simulate the conditions in which hiring decisions are made. Instead, participants self-selected to participate in a study knowing they'd be judging hypothetical candidates." (link)
However, our control for self-selection went far beyond the studies she cites as counterevidence. We began by reporting that demographics (gender, type of institution, and discipline) were similar between respondents and nonrespondents. The blogger never mentions this check on self-selection. Next, as this blogger notes but criticizes as a weak selection check, we paid a subsample of 90 psychologists, and 82 of them complied (a 91% response rate); their data resembled in every way the data from psychologists in the main sample who were not paid, suggesting that as one approaches total population parameters the preference for female applicants remains strong. Adding this step can only strengthen our study, notwithstanding unsupported claims that psychologists could infer the purpose of the study.
Next, we provided a third self-selection check the blogger also fails to note: We used national polling techniques to rule out effects of self-selection. (Interested readers should consult the Supporting Information online, which describes the sample-weighting procedure we used to ensure the generalizability of findings.) Our sample-weighting procedure involved adjusting for the responses and nonresponses from the 2,100 faculty solicited to be in our experiments. We tallied the women and men in each of the home departments of these 2,100 faculty and then calculated sample weights based on these counts (e.g., a woman sampled from an engineering faculty of 50, 10 of whom were female, would have her voting weight adjusted both for how many women and for how many total people were in her department, as well as for how many departments of its type were in the U.S. as a whole). These sample weights allowed us to examine the effect of nonresponding individuals in each of the 24 strata: 2 genders x 4 fields x 3 types of institutions. If self-selection were a factor, it would have changed the outcome when the analyses were re-run using sample weights to control for nonresponse. But the results did not change. The irony of this criticism about our alleged failure to check on self-selection is that we went far beyond any previous experiment on gender bias in hiring, by using national-polling-type sample weights to control for response and nonresponse rates in each subgroup of our sample.
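The logic of this kind of stratified nonresponse weighting can be sketched in a few lines of code. To be clear, this is our illustration of the general technique, not the authors' actual analysis code, and the strata and counts below are invented for the example: each respondent is up-weighted by the ratio of solicited to responding faculty in their stratum (gender x field x institution type), so that under-responding strata count proportionally more.

```python
from collections import Counter

# Hypothetical data: each solicited faculty member belongs to one stratum,
# here represented as a (gender, field, institution-type) tuple.
solicited = [
    ("F", "engineering", "research"), ("F", "engineering", "research"),
    ("M", "engineering", "research"),
    ("F", "biology", "research"),
    ("M", "biology", "teaching"), ("M", "biology", "research"),
]
responded = [
    ("F", "engineering", "research"),
    ("M", "biology", "teaching"),
    ("M", "biology", "research"),
]

def stratum_weights(solicited, responded):
    """Weight each responding stratum by solicited/responded counts,
    so strata with lower response rates count proportionally more."""
    n_solicited = Counter(solicited)
    n_responded = Counter(responded)
    return {s: n_solicited[s] / n_responded[s] for s in n_responded}

weights = stratum_weights(solicited, responded)
# Only 1 of the 2 solicited women in research engineering responded,
# so her response carries double weight:
print(weights[("F", "engineering", "research")])  # -> 2.0
```

A reanalysis then replaces simple means with weighted means of each respondent's candidate ratings; if the weighted and unweighted results agree, as the authors report, nonresponse within strata did not drive the outcome.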
In Part 4 of our response to critics we will examine allegations that we (a) have a right-wing agenda to overturn affirmative action, (b) control the journal that published our findings, (c) escaped the normal peer review process, and (d) sifted through comments, responding only to those made by men.