Reformers Should Respect Conventions of Scholarly Discourse

If CEDR sought to help students in the "Assessing the Determinants and Implications of Teacher Layoffs" study, it should have respected the principles of scholarship.

A pioneering econometric study mistakenly concluded that slavery was profitable by assuming that half of the mules in Kentucky were male and half were female. Not knowing that mules are sterile, the Harvard professors projected profits from mule reproduction.

I do not claim to have as many brain cells as when I was a graduate student deciphering those equations, but back then, scholars obeyed the conventions of honest discourse, and that provided an advantage in debating the significance of evidence produced by algorithms.

The Center for Education Data and Research (CEDR), however, continues the trend of violating the norms of social scientific inquiry in issuing another politicized study. This one is entitled, "Assessing the Determinants and Implications of Teacher Layoffs." CEDR is based in -- you guessed it -- Bill Gates' stomping ground of Washington. It announced that effectiveness-based layoffs using a value-added model simulation would increase reading test scores by .20 of a standard deviation. Apparently the students of 145 teachers would see those gains.

We should mend, not end, seniority. If CEDR sought to help students, it should have respected the principles of scholarship, and explicitly reported the information necessary to evaluate both the benefits and the costs of acting on its hypotheses. Above all, it should have accurately characterized the positions of its opponents.

After multiple readings of the report, I cannot tell whether CEDR ran a simulation on its entire database of teachers who received layoff notices, or whether it ran a separate simulation for each of Washington's school districts, as would be the case with real-world layoffs. If it was the former, the study should be labeled counterfactual theory with no relevance for policy discussions.

If it was the latter, CEDR should have fairly acknowledged the benefits of the seniority system, and the costs of destroying this collectively bargained institution, when presenting its case.

In all six versions of the value-added simulations, under seniority, teachers who were less effective at raising test scores were laid off. With seniority, a black student would be 0.96 percent more likely to have a teacher who could be laid off, but more black teachers would keep their jobs. CEDR did not see fit to bless us with the latter number so that citizens could make their own cost-benefit analysis.

In the most optimistic version of scenario #4, it was assumed that principals, freed of contractual checks and balances, would wisely use their new power -- so test-driven layoffs could increase reading value-added by .0034 to .016. Principals could thus fire a more effective, senior teacher by claiming that the younger teacher showed more potential. Presumably the principal would have enough of an understanding of a statistical black box based on A_ijkst = αA_ijks(t−1) + X_itβ + τ_jt + ε_ijkst to make an informed judgment.
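For readers curious about what sits inside that black box, here is a plain-English reading of a standard value-added specification of that form; this is a generic reconstruction, and the exact controls CEDR used may differ:

\[
A_{ijkst} = \alpha A_{ijks(t-1)} + X_{it}\beta + \tau_{jt} + \varepsilon_{ijkst}
\]

Here A_ijkst is the test score of student i taught by teacher j in school k and subject s in year t; the α term carries forward last year's score; X_it is a bundle of student characteristics; τ_jt is the estimated "teacher effect" on which the rankings, and therefore the layoff lists, are built; and ε_ijkst is everything the model cannot explain. That is my reading of the standard specification, not a line-by-line account of CEDR's model.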

But afterward, what teacher would dare challenge principals' claims that nonstop test prep, curriculum narrowing, and slavishly following scripted curricula are benefiting students? What teacher would commit to a school where it is harder to raise test scores when their career could be ended by the numbers from a statistical black box, and/or management's misunderstanding of those numbers, and/or management's power to simply claim that a teacher with a lower salary is more effective, and/or management's adoption of whatever new hypothesis comes down the pike?

CEDR emphasized that its simulation was designed to protect the jobs of younger teachers who were "more effective than the average teacher in the state." Even if test scores were a valid measure of teacher effectiveness, that goal is irrelevant. The question is whether its model could protect young teachers who were effective in increasing student performance in schools facing comparable challenges. In one quick passage, the report acknowledged the issue. Unfortunately, CEDR did not see fit to provide educators with real knowledge of actual systems, or with enough information to determine whether its models reflect enough knowledge of the real world to contribute to policy discussions.
