Huffpost Education
The Blog


John Thompson

Why Can't School "Reformers" Listen to Education Experts?


This week's debate between Eric Hanushek and Diane Ravitch exemplifies the tendency of true believers in data-driven policies to refuse to communicate with educators. Hanushek became so preoccupied with name-calling that he forgot that advocates of risky policy gambles bear just as much of a burden of proof for their "reforms" as do advocates of more balanced policies.

Based on statistical models, Hanushek argued that the "deselection" of ineffective teachers would be more beneficial than professional development to improve teacher quality. I don't necessarily disagree.

Ravitch would concentrate on improving teacher education and retention. Nor do I disagree with that.

But why would Hanushek not welcome a synthesis of his theories with Ravitch's realism? After all, Ravitch is one of our nation's premier scholars. As one reviewer explained, "she's had to learn the hard way (how to develop) a healthy skepticism about silver-bullet notions of reform."

Hanushek's calculations are based on correlations between test scores and future earnings, not on evidence of improved learning. That would be fine if he listened to education experts when interpreting his numbers. To my knowledge, Ravitch has not tried to micromanage Hanushek's algorithms. The economist should welcome Ravitch's insights into how schools and systems actually function.

In an initial post, Hanushek implied that he was willing to listen to explanations of why it is a lousy idea to use experimental statistical models to fire teachers. He stated that it is not necessary to use test scores to identify the lowest-performing teachers. In a follow-up post, he admitted that "other, unmeasured things beyond test scores are also important to students and to society."

But Hanushek claimed that there is "no reason" to believe that the teaching of those unmeasured skills would be hurt by greater high-stakes measurement of the skills that can be quantified. Hanushek thus seems to deny Campbell's Law: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."

Worse, Hanushek proclaimed that "there is no reason to believe that teachers at the bottom in terms of producing measured skills are anything but the bottom in producing useful unmeasured skills."

No reason?! Has the economist not read the overwhelming body of educational research explaining why it is more difficult to raise test scores in schools serving intense concentrations of generational poverty?

In the follow-up piece, Hanushek condemned Ravitch's precise and balanced positions as "Red Herrings," "a red herring," "a red herring," "a red herring," "a red herring," "a red herring," "red herrings," and "red herrings." As he used that label eight times, Hanushek complained that "Diane wants to introduce the idea that, while there are teachers who are harming kids, we should not deal with them because there might be some residual uncertainty about the very last teacher who is in this group." Actually, Ravitch wrote, "Nobody disagrees that there are ineffective teachers and that, if they are unable to improve, they should be removed."

The economist forgot that it is incumbent on him to provide a reason to believe that his theory would have positive real-world effects. Hanushek, for instance, bears the burden of showing that there is a long line of teachers who could step into the horrific conditions of our lowest-performing schools and somehow produce "average" scores. Instead, he argued that there is "no reason" to believe his experimental results were inaccurate.
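The "residual uncertainty" Ravitch raises is not a quibble; it is a statistical fact about noisy measures. A toy simulation (my own illustration, not Hanushek's actual model; the teacher count and noise level are invented for the sketch) shows what happens when you "deselect" the bottom 5% by a noisy value-added score, even in a world where every teacher is equally effective:

```python
import random

random.seed(0)

# Hypothetical setup: 1,000 teachers of IDENTICAL true effectiveness,
# each measured with noisy test-score "value-added". The noise level
# here is an assumption chosen purely for illustration.
N_TEACHERS = 1000
TRUE_EFFECT = 0.0   # every teacher is equally effective
NOISE_SD = 0.25     # year-to-year noise in the measured score

measured = [TRUE_EFFECT + random.gauss(0, NOISE_SD)
            for _ in range(N_TEACHERS)]

# "Deselect" the bottom 5% by the noisy measure.
cutoff = sorted(measured)[int(0.05 * N_TEACHERS)]
deselected = sum(1 for m in measured if m < cutoff)

print(f"Teachers 'deselected': {deselected} of {N_TEACHERS}")
# Every one of these dismissals is pure measurement error,
# since all teachers were built to be equally effective.
```

In reality teachers do differ, but the same mechanism guarantees that some share of those ranked at the bottom land there because of their students' circumstances or sheer noise, not their teaching, which is exactly why educators ask for more than "no reason" before firing on the numbers.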

If the economist refuses to listen to social science, perhaps he should consider the laws of "rational expectations" and "supply and demand." Why would talented teachers risk teaching in schools that are under the gun for low test-score growth, with high levels of truancy, disorder, and violence, in the hope that those factors would not hinder their ability to meet test-score targets?

Two-way communication is essential because there is no way of determining whether a teacher's or a principal's failure to reach growth targets was due to the educator's weakness or to dysfunctional policies imposed by their bosses. Administrators alone are the last people who should be allowed to determine whether a failure to adequately increase student performance was the educator's fault, or whether it was due to management's mandates that produced extreme concentrations of more difficult-to-educate students, management's decision to starve alternative services until it became impossible to enforce attendance and disciplinary policies, or the bosses' defensive tactic of mandating non-stop test prep.

And that gets back to Hanushek's first logical mistake. Yes, there are reasons why using flawed test data to fire teachers at the bottom would undercut the effectiveness of the majority of teachers. To a bureaucrat with a hammer, every teacher can look like a nail. As long as management is evaluated by outcomes on primitive bubble-in tests, administrators protecting their rear ends will face pressure to mandate counterproductive rote instruction.

Last week, another "reformer" wrote sarcastically about the 20,000 teachers who follow Diane Ravitch on Twitter. Accountability hawks are making a mistake, however, in not recognizing Ravitch's following as a potentially great asset. We practitioners could be invaluable to theorists trying to make sense of our complex educational system. What we cannot do, however, is go along with the idea that simplistic, hypothetical models can drive the reform of our complex and diverse educational systems.