Accountability and Evidence



Illustration by James Bravo

At some level, just about everyone involved in education is in favor of "using what works." There are plenty of healthy arguments about how we find out what works and how evidence gets translated into practice, but it's hard to support a position that we shouldn't use what works under at least some definition of evidence.

However, the dominant idea among policy makers about how we find out what works seems to be "set up accountability systems, then learn from successful teachers, schools, systems, or states." This sounds sensible, but in practice it is extremely difficult to do.

This point is made in a recent blog post by Tom Kane. Here's a key section of his argument:

[In education] we tend to roll out reforms broadly, with no comparison group in mind, and hope for the best. Just imagine if we did that in health care. Suppose drug companies had not been required to systematically test drugs, such as statins, before they were marketed. Suppose drugs were freely marketed and the medical community simply stood back and monitored rates of heart disease in the population to judge their efficacy. Some doctors would begin prescribing them. Most would not. Even if the drugs were working, heart disease could have gone up or down, depending on other trends such as smoking and obesity. Two decades later, cardiologists would still be debating their efficacy. And age-adjusted death rates for heart disease would not have fallen by 60 percent [as they have] since 1980.

Kane was writing about big federal policies, such as Reading First and Race to the Top, which cannot be evaluated because they go national before their impact is known. But the same is true of smaller programs and practices. It is very difficult to look at, for example, more and less successful schools (on accountability measures) and figure out what they did that made the difference. Was it a particular program or practice that other schools could also adopt? Or were the better-scoring schools simply lucky in having better principals and teachers? Is the school's intake or neighborhood changing? Any number of other factors may be at work, and many of them are not even stable for more than a year or two.

Accountability is necessary for communities to find out how students are doing. All countries have some test-based accountability (though none test every year, as we do from grades 3 through 8), but anyone who imagines that we can just look at test scores to find what works and what doesn't is not being realistic.

The way we can find out what works is to compare schools or classrooms assigned to use any given program with those that continue current practices. Ideally, schools and classrooms are assigned at random to experimental or control groups. That's how we find out what works in medicine, agriculture, technology, and other areas.
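The logic of random assignment can be made concrete with a small simulation. This is a minimal sketch, not an actual evaluation: the school names and achievement scores below are entirely made up, and the assumed 2-point program effect is arbitrary. The point is only to show why randomization lets a simple difference in means estimate a program's effect without selection bias.

```python
import random
import statistics

random.seed(42)

# Hypothetical list of participating schools (names are illustrative).
schools = [f"school_{i}" for i in range(20)]

# Randomly assign half the schools to use the new program (treatment)
# and half to continue current practices (control).
random.shuffle(schools)
treatment, control = schools[:10], schools[10:]

def simulate_score(in_treatment):
    """Simulated mean achievement for one school.

    We assume, purely for illustration, that school means are roughly
    normal around 50 and that the program adds about 2 points.
    """
    base = random.gauss(50, 5)
    return base + (2.0 if in_treatment else 0.0)

treatment_scores = [simulate_score(True) for _ in treatment]
control_scores = [simulate_score(False) for _ in control]

# Because assignment was random, the two groups differ only by chance
# and by the program itself, so the difference in means estimates the
# program's effect.
effect = statistics.mean(treatment_scores) - statistics.mean(control_scores)
print(f"Estimated program effect: {effect:.1f} points")
```

With only 20 simulated schools the estimate is noisy, which is exactly why real evaluations of this kind need enough schools or classrooms for chance differences to average out.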

I know I've pointed this out in previous blog posts, and I'll point it out in many to come. Sooner or later, it has to occur to our leaders that in education, too, we can use experiments to test good ideas before we subject millions of kids to something that will probably fail to improve their achievement. Again.
