How the U.S. News & World Report College Rankings Are Destroying Higher Education

08/30/2010 01:49 pm ET | Updated May 25, 2011

The most influential force shaping the priorities of higher education in America is the U.S. News & World Report, which ranks universities on their selectivity and the average SAT scores and high school grades of the incoming students. In other words, the universities are not ranked on what they actually do once the students get to the schools; instead, the institutions are rated on who attends the schools and how many people are excluded from attending. Universities and colleges thus have a perverse incentive to recruit students so that they can reject them and then raise the school's selectivity rating.

U.S. News & World Report also rates universities and colleges on their reputation as reported by peer institutions. This part of the ranking system has been highly controversial, but what most people fail to notice is that the reputation rating makes no attempt to assess the quality of education. In fact, none of the categories U.S. News & World Report uses even tries to judge what goes on in the classroom, largely because the institutions themselves have no shared method for judging the quality of faculty teaching or assessing the level of student learning. Since higher education institutions have not developed any accepted way of evaluating the effectiveness of undergraduate education, parents and students are forced to rely on ranking guides that the universities and colleges themselves reject.

Yet even though virtually every school criticizes the validity of the U.S. News & World Report rating system, these institutions still spend a great deal of money and time trying to raise their rankings. In other words, a bad evaluation system is driving the decisions of many of our colleges and universities. For instance, to raise their selectivity rating, schools pour money into advertising and recruitment to ensure that as many students as possible apply. Even universities that already reject the vast majority of applicants spend lavishly on attracting more, precisely so that they can turn away a higher number.

Another key way that schools compete for applicants is by showing off their athletic centers, food courts, and other extracurricular amenities. Once again, due to the lack of any accepted method of evaluating student learning, colleges and universities rely on non-educational features to attract and retain students. For example, when students and their parents go on school tours, most of the information given relates to non-educational topics like housing, parking, dining, fraternities, athletic facilities, and entertainment options, and when guides do provide information concerning educational activities, it is often false or misleading.

While class size is one possible indicator of educational quality, the way U.S. News & World Report presents it is very misleading. For instance, many large universities claim that more than 50% of their classes have fewer than twenty students. However, a school could meet that threshold by offering 100 classes of 500 students (serving 50,000 seats) and 101 classes of 5 students (serving 505 seats). It looks as though a student at this school has an even chance of getting into a small class, but in reality only about one seat in a hundred is in a class with fewer than twenty students. In other words, the two categories can contain a nearly equal number of classes while only a tiny percentage of students ever sits in a small one.
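The gap between the two ways of counting can be checked with a short sketch. The class counts and sizes below are the hypothetical figures from the example above, not data from any real school:

```python
# Hypothetical distribution from the example: 100 large classes of 500
# students and 101 small classes of 5 students.
large_classes, large_size = 100, 500
small_classes, small_size = 101, 5

total_classes = large_classes + small_classes
total_seats = large_classes * large_size + small_classes * small_size

# Share of *classes* that are small -- the figure the ranking reports.
share_of_classes = small_classes / total_classes

# Share of *seats* that are in small classes -- what a student experiences.
share_of_students = (small_classes * small_size) / total_seats

print(f"Small classes as a share of all classes: {share_of_classes:.1%}")
print(f"Students actually in a small class:      {share_of_students:.1%}")
```

Run on these numbers, the first figure comes out to roughly 50% and the second to roughly 1%: the same catalog looks dramatically different depending on whether you count classes or students.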

Another tricky indicator of academic quality employed by U.S. News & World Report and other ranking systems is the percentage of faculty who are full-time. Not only does this indicator have no proven connection to educational quality, but the reported statistics are highly misleading. In fact, most of the top universities report that over 90% of their faculty are full-time, but they can only make this claim by not including graduate students and non-tenured instructors as faculty. If these universities did count everyone who actually teaches at the university, the number of full-time faculty teaching undergraduate courses would be closer to 35%. Moreover, this statistic does not account for the fact that many professors never teach undergraduate students, and nationally, many of the classes taught by professors at research universities are graduate courses. Likewise, U.S. News & World Report also uses a statistic concerning the student-to-faculty ratio, but this number does not examine whether these faculty members actually teach undergraduate students.
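The same counting trick can be illustrated numerically. The headcounts below are hypothetical, chosen only to be consistent with the article's rough figures (a reported 90% full-time share falling to about 35% once everyone who teaches is counted); they do not describe any real institution:

```python
# Hypothetical headcounts (illustrative only, not real reporting data).
full_time_faculty = 700      # full-time, tenure-line appointments
part_time_counted = 78       # the few part-timers the school does count
excluded_instructors = 1222  # grad students and non-tenured instructors
                             # left out of the official "faculty" count

# Counting rule the school uses when reporting to the rankings:
reported_share = full_time_faculty / (full_time_faculty + part_time_counted)

# Counting rule that includes everyone who actually teaches:
actual_share = full_time_faculty / (
    full_time_faculty + part_time_counted + excluded_instructors
)

print(f"Reported full-time share: {reported_share:.1%}")
print(f"Share counting everyone:  {actual_share:.1%}")
```

With these numbers the reported share is about 90% while the inclusive share is 35%, showing how the definition of "faculty", not the teaching staff itself, drives the published statistic.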

This brief examination of the U.S. News & World Report rankings shows that most, if not all, of the major categories are misleading, and none even tries to account for student learning or the effectiveness of teachers. In other words, parents and students make one of the biggest and most expensive decisions of their lives based on faulty and deceptive information. Moreover, universities and colleges often criticize these rankings even as they pour money into trying to improve their positions. It is clear that we need a new system of rankings and ratings, and if higher education institutions do not begin to assess the quality of their instruction in a clear and transparent manner, some outside entity, like the federal government, will impose a standardized approach to judging the quality of instruction and student learning.