03/11/2011 01:49 pm ET Updated May 25, 2011

The New York Times Needs to Do Its Homework on Teacher Evaluations

An editorial on March 7 in The New York Times, titled "Fairness in Firing Teachers," has me wondering whether the Times editors understand much about how teachers -- in New York City and elsewhere -- are evaluated. The editorial makes some stunning statements that simply don't comport with reality.

First, there's this: "Most reasonable people would agree that, when layoffs become necessary, teachers should be let go through objective evaluations of how well they improve student performance, and not merely on the basis of seniority. The problem throughout most of the country is that evaluation systems are not in place. In New York City, only about 12,000 of 80,000 teachers have been evaluated, based on their students' grades on standardized tests."

This, the opening paragraph of the editorial, is factually incorrect. It is untrue that "throughout most of the country ... evaluation systems are not in place." Just about every school district in the country has a teacher evaluation system.

The problem isn't that evaluation systems don't exist but that most aren't very good. As The New Teacher Project demonstrated with its 2009 report, "The Widget Effect: Our National Failure to Acknowledge and Act on Differences in Teacher Effectiveness," most teacher evaluation systems in the U.S. rate the vast majority of teachers -- upwards of 99 percent -- as effective. So the problem is not that teachers aren't regularly evaluated -- they are -- but rather that the evaluations are mostly meaningless.

Also, it's untrue that "only about 12,000 of 80,000 teachers [in New York City] have been evaluated." New York City teachers are routinely evaluated by their principals, a point made clear in a different New York Times piece on March 7.

What the Times editorial writer probably meant is that only 12,000 out of 80,000 New York City teachers have value-added scores and thus receive Teacher Data Reports. That's because value-added scores currently exist only for those who teach English or math in grades 4-8 -- hence, just 15 percent of the city's teachers receive the data reports. But that's not the same thing as claiming that only 15 percent of the city's teachers are evaluated.

It's interesting to note that this sentence would have been correct if a certain comma had been omitted: "In New York City, only about 12,000 of 80,000 teachers have been evaluated, based on their students' grades on standardized tests." Kill the comma between "evaluated" and "based," and you suddenly have a true statement. Keep the comma and it's inaccurate. Behold the power of punctuation!

The more important point here -- which the editorial writer fails to make -- is that 85 percent of New York City teachers aren't teaching subjects or grades for which value-added scores can currently be calculated. Eighty-five percent!

Another statement worth scrutinizing is this: "teachers should be let go through objective evaluations of how well they improve student performance." There seems to be an implicit assumption in the editorial that a system of "objective evaluations" exists but that some parties -- teachers? their unions? -- oppose it. It doesn't take too much reading between the lines to see that the editorial is suggesting value-added scores of teachers are these "objective evaluations." (Otherwise, why the reference to 12,000 out of 80,000 teachers in New York City having been "evaluated"?)

Actually, no "objective evaluation" system exists. Not even those that rely heavily on value-added scores.

Plenty of evidence -- anecdotal (see Michael Winerip's piece in the Times on March 7) as well as hard research -- indicates that constructing a value-added model requires lots of decisions, the most important of which is the inclusion or exclusion of certain variables. These decisions are, of course, inherently subjective. The New York City model contains 32 variables.
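To see how subjective those modeling decisions are, consider a minimal sketch with entirely invented numbers (this is not the actual New York City model, just a toy): a single choice about whether to adjust for student poverty flips one teacher's rating from negative to positive.

```python
# Toy illustration (invented numbers): how one modeling choice --
# whether to adjust for student poverty -- changes a teacher's
# "value-added" rating. Real models, like New York City's with its
# 32 variables, involve dozens of such choices.

# (prior score, current score, high_poverty) for each student.
teacher_a = [(70, 72, False), (80, 82, False), (90, 92, False)]
teacher_b = [(70, 67, True), (80, 77, True), (90, 87, True)]

def value_added(students, poverty_adjustment=0.0):
    """Mean score gain, optionally adding back an assumed
    poverty-related deficit (a subjective modeling choice)."""
    gains = [curr - prior + (poverty_adjustment if poor else 0.0)
             for prior, curr, poor in students]
    return sum(gains) / len(gains)

# Choice 1: no poverty adjustment -- teacher B looks ineffective.
print(value_added(teacher_a), value_added(teacher_b))            # 2.0 -3.0

# Choice 2: assume a 5-point poverty deficit -- the teachers tie.
print(value_added(teacher_a, 5.0), value_added(teacher_b, 5.0))  # 2.0 2.0
```

Both runs use the same test scores; only the analyst's assumption differs. Which answer is "objective"?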

Whether the end product of a subjective process like constructing a value-added model can be called "objective" is doubtful.

What is less doubtful is that value-added models occasionally misidentify high- and low-performing teachers, a point on which both supporters and opponents of using student test scores in teacher evaluations agree. How frequently such mistakes happen is a matter of heated debate, but that they do happen isn't really contested. With that in mind, it seems a stretch to call any system of evaluating teachers "objective."

Objectivity in evaluations -- of anything or anybody, not just of teachers -- is a myth, even if the evaluations are rooted in objective-looking numbers. One can certainly make the case that value-added scores are less subjective than administrators' observations, but that's not quite the same thing as saying they are entirely objective.

The Times editorial writer calls for the New York state legislature to "make sure that the scoring system weighs student performance [in teacher evaluations] most heavily." What the writer seems blind to is the fact that measuring student performance in any meaningful way is an art, not a science, and doing it well means using multiple measures (not just standardized test scores).

We remain far from a system that measures student or teacher performance well. So while it is easy for politicians and pundits -- from Mayor Michael Bloomberg and New Jersey Gov. Chris Christie to Michelle Rhee and The New York Times editorial board -- to call for an end to seniority considerations in layoff decisions, it's not at all clear what should be used instead of seniority (especially in the short term, with layoffs imminent). The alternatives -- like a robust evaluation system that could accurately and comprehensively capture educator effectiveness -- exist more in theory than in practice.