The College Swimsuit Edition: Making Sense of the College Ranking Wars

Today, the real and virtual ink expended on elaborate rebuttals of the so-called higher education "swimsuit" editions that rank colleges and universities matches the ink spent producing those editions.

Americans love to rate and rank almost anything, and magazine and newspaper editors understand that rankings can be very profitable. The ranking wars now extend well beyond the releases of U.S. News & World Report and the Princeton Review; they are an indelible part of the higher education landscape.

College and university presidents - often with the best intentions - rail against the findings, the methodologies employed, and the simplistic conclusions. Which schools are the most "leftist," the most "jock," the best looking, or home to "reefer madness"? Yet in the end, most universities participate and complete the surveys, and many use favorable findings to brand their institutions.

Recently, the federal government entered the fray. Subjected to blistering criticism from the higher education community, the feds nevertheless created a "College Scorecard," essentially a massive data dump that provides useful "national norm" data. It offers information on how much students earn ten years after entering school, for example, as well as the percentage of first-generation students at a school and the percentage of students who repay at least some of the principal on their federal loans within three years.
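The Scorecard's numbers are also exposed through a public API on api.data.gov, so the "data dump" can be queried directly. What follows is a minimal sketch of pulling a few of the metrics mentioned above; the endpoint is the Scorecard's documented one, but the field names, the example school, and the key handling are assumptions that may not match the current release.

```python
# Minimal sketch: query the College Scorecard public API for a few of the
# metrics discussed above. The endpoint is the Scorecard's documented one;
# the field names below are assumptions drawn from its data dictionary and
# may differ in the current release. A free api.data.gov key is required.
import requests

API_KEY = "YOUR_API_KEY"  # obtain from api.data.gov
URL = "https://api.data.gov/ed/collegescorecard/v1/schools"

params = {
    "api_key": API_KEY,
    "school.name": "Example University",  # hypothetical school
    "fields": ",".join([
        "school.name",
        "latest.earnings.10_yrs_after_entry.median",  # assumed field name
        "latest.student.share_firstgeneration",       # assumed field name
        "latest.repayment.3_yr_repayment.overall",    # assumed field name
    ]),
}

response = requests.get(URL, params=params, timeout=30)
response.raise_for_status()
for school in response.json().get("results", []):
    print(school)
```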

Critics note that the findings track only federal aid recipients and do not account for institutional mission or for differences in the populations served, differences that affect a range of outcomes, including time to degree and how well an institution meets the real needs of the students it actually admits.

The Economist and the Brookings Institution are the two most recent contenders in the "swimsuit edition" battles. The Economist wanted to know "how a wide range of factors would affect the median earnings in 2011 of a college's former students." As the magazine explains: "Its ratings are based on a simple, if debatable premise: the economic value of a university is equal to the gap between how much money its students subsequently earn, and how much they might have made had they studied elsewhere." Its researchers attempted to control for high-paying majors like engineering, geographical location, and hot job markets, among other factors.

In effect, The Economist asked which colleges produce the best results for students in a given field by measuring the differential between their expected and actual median earnings.
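A minimal sketch of that calculation, in Python with invented toy numbers rather than The Economist's actual data or model: predict expected earnings from observable inputs, then score each college by the gap between actual and predicted median earnings.

```python
# Minimal sketch of an Economist-style "expected vs. actual earnings"
# calculation. The features and figures are invented for illustration;
# the magazine's actual model controls for many more variables.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical inputs per college: [share of engineering majors, avg. local wage]
X = np.array([
    [0.40, 52_000],
    [0.05, 48_000],
    [0.10, 61_000],
    [0.25, 55_000],
])
# Hypothetical actual median earnings of former students
actual = np.array([68_000, 45_000, 57_000, 60_000])

# Expected earnings: what the observable inputs alone would predict
expected = LinearRegression().fit(X, actual).predict(X)

# "Value added" is the gap; positive means students out-earn the prediction
for i, gap in enumerate(actual - expected):
    print(f"College {i}: value added = {gap:+,.0f}")
```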

The Brookings Institution took a different approach. It focused on the "value added" by colleges, using a broader sample of institutions and a different set of variables, with an emphasis on areas like the curriculum. As a result, engineering schools dominate the upper tier of its rankings while art and religious institutions populate the bottom groups.

Comparing the two sets of findings makes one thing clear: the outcomes depend on the assumptions and biases built into each methodology.

It's not that the federal government and the publications behind college rankings are wrong to produce them. In theory, more information gives consumers better choices. The problem is that no one has developed a distinctive methodology that is simple yet nuanced enough to address consumers' needs.

One fact is inescapable: the ranking debates will continue. Rankings are profitable and perceived by many as a valuable public service. Yet each ranking carries its own emphases, which at best tell only part of the story, and each rests on methodological choices that limit its utility.

The compounding effect is an ill-informed, overwhelmed, and confused consumer.

As a result, consumers - especially first-generation college students without access to reliable counseling - face uncoordinated, imperfect data dumps stripped of context and meaning. Data exist for many key metrics, but no "reliable source" explains them in a way that builds credibility and confidence in any rankings system.

It is time to leave politics, ideology, and prejudice at the door. The first questions must be: How can we best serve students and their families? What are the key metrics that permit them to make an informed decision?

American colleges and universities have an important role to play here. If U.S. News is sophomoric and ineffective, we first need to understand which metrics it gets right. If the federal government presents data without context or completeness, what alternatives exist, or can be drawn reliably from what federal officials provide? In short, how can American higher education - or at least groups of same-purpose, historically connected, similarly scaled colleges and universities - come together to explain what they do?

The current rating systems don't work. They do an especially poor job of incorporating mission, purpose, demographics, student type, and outcomes. American higher education has an image problem determined in part by its failure to combat weak data that have come to define it in the eyes of consumers.

Higher education leaders need to fix the problem now. Colleges must once again become the trusted "reliable source" in making their own case to the public.
