In last week's column, I made the point that while there are a number of positive aspects of President Obama's proposed plan to reform higher education, the most important part of the plan (and the part that does not require Congressional approval and thus stands a chance of being implemented) could potentially do more harm than good if it is not carefully and judiciously thought out. I'm speaking of the proposal to create a college rating system that would then be used in determining how student aid gets distributed among the various colleges and universities.
I discussed how two criteria that have been suggested as likely components of such a rating system -- tracking graduation rates, and tracking how much students from various institutions earn in the year after they graduate -- can be misleading. Here I would like to examine in more depth why the graduation rate in particular is a problematic criterion.
While it is entirely reasonable to expect college students not only to graduate but to do so in a timely manner, an unnuanced method of accounting for these rates can produce an inaccurate representation of an institution.
The principal problem is that many of the most recent calls for holding institutions accountable for college affordability have also called for keeping track of how many students graduate within four years. This is the fatal flaw.
Following the federal government's lead, colleges and universities have traditionally been measured by their six-year (not four-year) graduation rate. Why? Because even the U.S. Department of Education's main repository of federal data on colleges, the Integrated Postsecondary Education Data System (IPEDS), has attempted to account for the vast diversity of today's college population as well as the wide variety of program types, some of which are designed to take longer than four years but may result not only in a bachelor's degree but a master's as well.
While it might have been reasonable to expect college students in the 1950s or 1960s to graduate in four years, the present-day population bears little resemblance to its predecessors. The earlier generation of students was predominantly white, male, and from families that were, if not affluent, certainly not poor. Although many students in those years were the first in their families to attend college, most came from families with a long tradition of attending college -- a huge advantage for a student's academic success.
In other words, these students were relatively privileged, economically and socially, and were well positioned to complete college rapidly. If you have little or no economic pressure to work while attending college, if you have the advantage of family members who can explain the ins and outs of college life so you don't have to reinvent the wheel, if you have the luxury of moving directly into college from high school without having to take some years off first to work full time, you are much more likely to complete college in four years than are your peers who are not as privileged.
As a population, the college students of today are very different: women outnumber men, the proportions of minority and international students are increasing, many are the first in their families to attend college, many have no choice but to work while attending school, many are of non-traditional age (that is, older than the typical 18-to-22-year-old), and some are single parents or military veterans struggling to juggle college life with other priorities.
It is a real tribute to all of these people (and to our society in general) that they are attempting to earn college degrees, but it is unrealistic and even unreasonable to expect this population to complete college in the same timeframe as their privileged predecessors. It is simply not a level playing field.
If we are going to go forward with a rating system for colleges, let's make sure it is not constructed by politicians, bureaucrats, or others who do not understand how colleges work. Let's instead come up with a system that truly measures what it sets out to measure.