Matthew Di Carlo

 

The Evidence on Charter Schools

Posted: 11/23/11 01:48 PM ET

In our fruitless, deadlocked debate over whether charter schools "work," charter opponents frequently cite what is often called the CREDO study (discussed here), a 2009 analysis of charter school performance in 15 states and the District of Columbia. The results indicated that overall charter effects on student achievement were negative and statistically significant in both math and reading, but both effect sizes were tiny. Given the scope of the study, it's perhaps more appropriate to say that it found wide variation in charter performance within and between states -- some charters did better, others did worse, and most were no different. On the whole, the aggregate effects, both positive and negative, tended to be rather small.

 

Recently, charter opponents' tendency to cite this paper has been called "cherrypicking." Steve Brill sometimes levels this accusation, as do others. It is supposed to imply that CREDO is an exception -- that most of the evidence out there finds positive effects of charter schools relative to comparable regular public schools.

 

CREDO, while generally well-done given its unprecedented scope, is a bit overused in our public debate -- one analysis, no matter how large or good, cannot prove or disprove anything. But anyone who makes the "cherrypicking" claim is clearly unfamiliar with the research. CREDO is only one among a number of well-done, multi- and single-state studies that have reached similar conclusions about overall test-based impacts.

 

This is important because the endless back-and-forth about whether charter schools "work" -- whether there is something about "charterness" that usually leads to fantastic results -- has become a massive distraction in our education debates. The evidence makes it abundantly clear that this is not the case, and the goal at this point should be to look at the schools of both types that do well, figure out why, and use that information to improve all schools.

 

First, however, it's important to review the larger body of evidence that corroborates CREDO's findings. For example, this 2009 RAND analysis of charter schools in five major cities and three states found that, in every location, charter effects were either negative or not discernibly different from regular public schools'. As one might expect, charters tended to get better results the more years they'd been in operation.

 

Similarly, a 2010 Mathematica report presented the findings from a randomized controlled study of 36 charter middle schools in 15 states. The researchers found that the vast majority of students in these charters did no better or worse than their counterparts in regular public schools on both math and reading scores, as well as on virtually all of the 35 other outcomes studied. There was, however, underlying variation -- e.g., results were more positive for students who stayed in the charters for multiple years, and for those who started out with lower scores.

 

A number of state-specific studies buttress the conclusion of wide variation in charter effects.

 

A paper published in 2006 found slightly negative effects of charters in North Carolina (CREDO's results for North Carolina were mixed, but essentially found no difference large enough to be meaningful). There was a positive charter impact in this paper using Texas data, but it only surfaced after 2-3 years of attendance, and the effect sizes were very small (this Texas analysis found the same for elementary and middle schools but not high schools, while CREDO's evaluation found small negative effects).

 

A published analysis of charters in Florida found negative effects during these schools' first five years of operation, followed by comparable performance thereafter (the reading impact was discernibly higher, but the difference was small; it's also worth noting that CREDO's Florida analysis found a small positive effect on charter students after three years of attendance), while a 2005 RAND report on California charters revealed no substantial difference in overall performance (also see here, here and here). Finally, a 2006 study of Idaho schools found moderate positive charter effects, while students attending Arizona charters for 2-3 years had small relative gains, according to a 2001 Goldwater Institute analysis (CREDO found the opposite).

 

In an attempt to "summarize" the findings of these and a few other studies not discussed above, the latest meta-analysis from the Center for Reinventing Public Education (CRPE) found that charter and regular public school effects were no different in middle school reading and high school reading and math. There were statistically discernible positive impacts in middle school math and elementary school math and reading, but the effect sizes were very modest. The primary conclusion, once again, was that "charters under-perform traditional public schools in some locations, grades, and subjects, and out-perform traditional public schools in other locations, grades, and subjects." This lines up with prior reviews of the literature.

 

Finally, just last week, Mathematica and CRPE released a report presenting a large, thorough analysis of charter management organizations (CMOs). In order to be included in the study, CMOs had to be well-established and run multiple schools, which means the schools they run are probably better than the average charter in terms of management and resources. The overall results (middle schools only) were disappointing -- even after three years of attendance, there was no significant difference between CMO and comparable regular public school students' performance in math, reading, science or social studies. Some CMOs' schools did quite well, but most (14 of 22) were no different or worse in terms of their impact.

 

Unlike some other interventions that dominate today's education policy debate, most notably test-based teacher evaluations, there is actually a somewhat well-developed literature on charter schools. There are studies almost everywhere these schools exist in sufficient numbers, though it is important to point out that the bulk of this evidence consists of analyses of test scores, which is of course an incomplete picture of "real" student learning (for example, a couple of studies have found positive charter effects on the likelihood of graduating). This reliance on test scores also limits many of these evaluations to tested grades.

 

In general, however, the test-based performance of both charter and regular public schools varies widely. When there are differences in relative effects, positive or negative, they tend to be modest at best. There are somewhat consistent results suggesting that charters do a bit better with lower-performing students and other subgroups, and that charters improve the longer they operate. But, on the whole, charters confront the same challenges as traditional district schools in meeting students' diverse needs and boosting performance. There is no test-based evidence for supporting either form of governance solely for its own sake.

 

So, if there is any "cherrypicking" going on, it is when charter supporters hold up the few studies that find substantial positive effects across a group of schools in the same location. This includes, most notably, very well-done experimental evaluations of charter schools in New York City and Boston (as well as a couple of evaluations of schools run by KIPP, which are dispersed throughout the nation, and a lottery study of a handful of charters in Chicago).

 

Ironically, though, it is in these exceptions that the true contribution of charter schools can be found, as they provide the opportunity to start addressing the more important question of why charters produce consistent results in a few places. Similarly, buried in the reports discussed above, and often ignored in our debate, are some hints about which specific policies and practices help explain the wide variation in charter effects. That's the kind of "cherrypicking" that we need -- to help all schools.

 

This research -- and what it might mean -- is discussed in the second and third parts of this series on charter schools.

 

Follow Matthew Di Carlo on Twitter: www.twitter.com/shankerblog