A Fairness Doctrine For Academia

An essay excerpted from How Professors Think: Inside the Curious World of Academic Judgment (Harvard University Press).

In American higher education, excellence, merit, and quality are often captured by quantitative measures such as GRE and SAT scores (if you are a student) or number of citations (if you are a researcher). But when it comes to evaluating the proposed work of academics across disciplines, simple measures like these usually will not do. Instead, scholars are brought together to deliberate the merits of these proposals. As historians, political scientists, anthropologists, and literary scholars weigh in with their particular expertise, they also learn from one another, improvise, opine, convince, and attempt to balance competing standards. They strategize, high-ball, and follow scripts shaped by their academic disciplines, but they also contextualize and compromise. They respect alternative perspectives and expect reciprocity. They try to impress other panelists, save face, and help others do the same. They set the agenda, flex their muscles, see if they measure up, and enjoy intellectual barter. They invest themselves in decisions and share excitement with others. They reach "good enough" solutions instead of ideal ones, because they have to get the job done in the time allotted. They go home usually feeling that they have risen to the occasion, betraying neither "the system" nor themselves. They have stood for principles, but not so rigidly that they could not reach consensus. For them, panels are an opportunity to be influential, and to be appreciated.

Within practical constraints, panelists aim to "produce the sacred" of fair evaluation, while respecting institutional, disciplinary, and other diversities. In particular, disciplinary cultures are tempered by the exigencies of multidisciplinary evaluation. Evaluators aim for consistency in standards across disciplines even as they use standards appropriate to the discipline of the applicant. They both engage in consensual and egalitarian decision making and defer to expertise. In addition, evaluators attempt to balance meritocracy and diversity, seeing these as complementary ideals, not alternatives.

The panelists' experience reflects many of the system's tensions, and the doubts these tensions create. Just how biased is academia? Do people get what they deserve? Am I getting what I deserve? Their collective evaluation mobilizes and intertwines emotions, self-interest, and expertise. Moreover, it requires coordinating actions and judgments through a culture of evaluation that has been established long before the panelists set foot in the deliberative chambers.

This story is fundamentally about fairness and the attempt to achieve it. What is presented as expertise may sometimes be merely preference ("taste"), described in depersonalized language. The reciprocal recognition of authority is central to the process, but it may lead to explicit horse-trading, which produces suboptimal results. Despite these potential hazards, however, panelists think the process works, in part because they adopt a pragmatic conception of "truth" (or at least of what constitutes a "fair evaluation") as something inevitably provisional and defined by the best standards of the community at the time.1 Indeed, the constraints on the evaluative process--particularly the considerable time that panelists spend preparing for deliberations and their dedication to convincing their peers of the merit of their point of view--go a long way toward creating the conditions for a more meritocratic system. The performative effects of positing a meritocratic system are comparable to those of having "faith in the market": the belief creates the conditions of its own existence--within limits.2

Some academics have a propensity to assume that quality is intrinsic to the work and that some scholars have a natural talent for finding it. But in fact the "cream" does not rise naturally to the top, nor is it "dug out" in unlikely places: it is produced through expert interaction, with the material provided by applicants. Neither the work nor the people are socially disembedded. Panelists' definitions of excellence are rooted in and arise from their networks of colleagues and ideas. They aim for fairness, but the taken-for-granted aspects of social life--the cognitive structures they use routinely, the multiple networks of which they are a part--may lead them to assume that what appeals to them is simply best.

So evaluation is contextual and relational, and the universe of comparables is constantly shifting. Proposals demand varied standards, because they shine under different lights. In some cases, the significance of the proposed work is determined by the likely generalizability of its findings. In others, how a topic informs our understanding of broader processes is more important. In yet other cases, significance is assessed by the deeper understanding that results from a particular interpretation. In panel deliberations, the ideal of a consistent or universalist mode of evaluation is continually confronted with the reality that different proposals require a plurality of assessment strategies.

This plurality manifests itself starkly when disciplinary evaluative cultures are exposed in the kinds of arguments that individual panelists make for or against proposals, and in how these arguments are received--factors that together influence which proposals will be funded. In evaluating excellence, formal and informal criteria of evaluation are weighed differently by humanists, social scientists, and historians. Yet across fields, excellence is viewed as a moral as well as a technical accomplishment. It is thought to be a result of determination and hard work, humility, authenticity, and audacity. Other "evanescent qualities" count too, even if, as is the case for elegance and the display of cultural capital, they run counter to the meritocratic ideal that animates the system.

The self-concept of evaluators is central to the process of assessment, especially to the perception that the decision making is fair. Panelists evaluate one another as they evaluate proposals. Their respect for customary rules sustains their identity as experts and as fair and broadminded academics, who as such deserve to serve on funding panels. Yet if their self-concept orients knowledge production and evaluation, panelists downplay its role, often viewing it as an extraneous and corrupting influence.3

For most panelists, interdisciplinarity and diversity are aspects of excellence, not alternatives to it. Because there is a lack of agreement on the standards of evaluation for interdisciplinary genres, panelists readily fall back on the tools they have available--existing disciplinary standards--to determine what interdisciplinary research should be funded. While debates about diversity in higher education have focused mostly on gender and racial and ethnic diversity, the scholars I talked to were most concerned with institutional and disciplinary diversity. They see diversity as a good that can lead to a richer academic life for all and to a broader production of talent for society as a whole.

Like the panelists in my study, many American academics take for granted the legitimacy of the peer review system. Yet estimates of the fairness of the meritocratic process may ebb and flow with one's own academic successes and level of ambition. Empirical research should establish whether outsiders have the least faith. The peer review process is deeply influenced by who gets asked to serve as a panelist and what viewpoints and intellectual habitus those individuals bring to the table. Biases are unavoidable. In particular, program officers tend to extend invitations to the most collegial (who may be the least objectionable and most conventional) scholars and those whose careers are already established. Thus peer review is perceived to be biased against daring and innovative research--an explanation I have often consoled myself with when my own research proposals have been denied funding. And it is true that, at the end of the day, we cannot know for sure whether the "cream rises." But if panelists believe that it does, and make considerable sacrifices to do a good job, they contribute to sustaining a relatively meritocratic system. A system where cynicism prevailed at all levels would most likely generate much greater arbitrariness, and would result in less care being put into the decision-making process and into the preparation of applications.

__________________________________________
NOTES

1. This is much in line with the concept of truth in James (1911).
2. MacKenzie and Millo (2003); also Dobbin (1994).
3. See, e.g., Shapin (1994) and Daston and Galison (2007).