When it comes to identifying how well an institution is doing on student success, even the most basic data element is difficult to define. Indeed, how does one define "student"? Is it a single individual sitting in a classroom? Does a student also need to be enrolled in other classes to count -- say, 12 to 16 credits of classes? Do students even need to be in a classroom, or might they be studying online, independently, or perhaps working one-on-one with a mentor? If a student sits out a term or elects fewer credits in a subsequent term, does that automatically make him or her a "drop-out"? And if the student later returns to the first institution, should the clock start again? What if a student simply transfers to another institution -- perhaps maintaining continuous enrollment or, alternatively, skipping a term? And how do we know when a drop-out truly is gone for good? These may seem arcane questions, but all of them affect institutional, state, regional, and national statistics, because large numbers of students fall under each of these variations, and others.
The earliest statistical studies of attrition appear to go back only to the early 1970s, when a plethora of new institutions appeared, many supported by public funds, and that fact made student success (or the lack of it) more than just a curiosity or a personal issue. Right from the start, questions like those above needed to be answered or, at least, conventions for measurement needed to be developed. For example, one reason that Empire State College (SUNY) launched one of the first-ever cost-effectiveness studies of a postsecondary institution was the amount of public funding the institution received and the interest that funding generated. Over time, one convention emerged that helped reduce the challenge: the focus on first-time, full-time students. It was easy to establish a cohort with each incoming fall class, and if one assumed that almost all students planned to complete their studies where they started, within the traditional four-year time frame, a tangible pipeline was established and a scorecard of student success could be developed. It worked well, especially for traditional four-year liberal arts colleges that primarily served young people 17 to 22 years of age. Over time, the U.S. Department of Education added another convention when it began asking for data on the number of students who completed their studies within 150 percent of the normal time frame (six years for a baccalaureate degree, three years for an associate degree), making it easier for institutions to allow for some slippage in attrition rates without admitting failure. If only the institutional options had stayed simple.
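The "150 percent of normal time" convention described above is, at bottom, a simple calculation. Here is a minimal sketch of it in Python; the field names, the toy cohort, and the helper function are hypothetical illustrations, not any agency's actual reporting code:

```python
# Sketch of the federal "150 percent of normal time" completion convention:
# six years for a baccalaureate degree, three years for an associate degree.
# Degree labels, cohort records, and function names are hypothetical.

NORMAL_TIME_YEARS = {"baccalaureate": 4, "associate": 2}

def completed_within_150_percent(degree_type, years_to_completion):
    """Return True if a completer finished within 150% of normal time."""
    limit = NORMAL_TIME_YEARS[degree_type] * 1.5
    return years_to_completion is not None and years_to_completion <= limit

# A toy first-time, full-time fall cohort; None means no degree earned.
cohort = [
    ("baccalaureate", 4),   # finished in the traditional four years
    ("baccalaureate", 6),   # finished in exactly 150% of normal time
    ("baccalaureate", 7),   # finished, but outside the reporting window
    ("associate", None),    # still enrolled or gone; counted as a non-completer
]

rate = sum(completed_within_150_percent(d, y) for d, y in cohort) / len(cohort)
print(f"150%-time completion rate: {rate:.0%}")  # -> 50%
```

Note how blunt the instrument is: the seven-year finisher and the student who never returns are counted identically, which is exactly the distortion the essay goes on to describe.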
Today, the traditional 17- to 18-year-old, first-time, full-time freshman is almost an anomaly. Certainly, there are still many of them, but there are many other variations on the theme. For example, many students choose to live at home and attend a local community college for a year or two, always planning to transfer later to another institution. Others realize early on that they need to mature and/or earn more money before completing their education, and so they stop out for a time. Many adults first attend college while working full time and maintaining family responsibilities -- they do not even consider full-time study an option. And then there are others who serve in the military, where they take some formal college courses. These students might even enroll full time after their service ends, but they are not considered "first-time."
In fact, these are only a few aspects of the nightmare that faces anyone trying to do a meaningful study of attrition and student success. Any variation from continuous full-time study at a single institution makes a student's effort seem like failure. Yet, at Saint Leo University we have students enrolled who began their pursuit of a bachelor's degree more than 15 years ago and are still enrolled, taking one class at a time.
We need to do better by creating mechanisms to follow students across institutions, as the National Student Clearinghouse is attempting to do. But to date, the reported completion statistics are, indeed, grim, with 40 to 60 percent of students appearing to drop out. And as students take advantage of the proliferating options for study, the data will look even worse. Ironically, the only one who can really determine whether a student has dropped out, most likely, is the student himself -- and even the student may not be sure!