Teacher Data Reports: What Should We Be Measuring?

01/10/2011 01:51 pm ET | Updated May 25, 2011

This week, along with many other New York City teachers, I received an e-mail about the implementation of Teacher Data Reports. The opening of the message, peppered with platitudes, had a surprisingly affable tone -- one clearly meant to assure skeptical educators that such reports will be anything but career-jeopardizing, publicly humiliating, or a narrow evaluation of their effectiveness.

"We are happy to welcome you back to your classrooms after a well-deserved winter break. With the beginning of the new year, we are writing today to spread the word that teachers in the DOE are now being asked to verify their class lists from 2005-06 to 2009-10 for the Teacher Data Reports."

The subtext? Big Brother is watching.

I can already anticipate counterarguments to my criticism of this teacher rating system: If you're doing your job, is there anything to worry about? In what job are employees' performances not evaluated? Isn't increased transparency critical to fixing many of America's broken institutions? These may be fair claims, but this rating system contains a fundamental flaw.

For the Teacher Data Reports, test scores serve as the dominant metric through which teacher effectiveness is evaluated. While some mechanisms to differentiate the data are embedded within this system -- grade level, subject area, years of experience, special education status -- passing rates remain the primary method for determining a teacher's value.

And so endures an unceasing emphasis on standardized testing. According to this system, good teachers are those who can teach to the test. And in a jam-packed school year, where there is hardly enough time to cover necessary content, 'teaching to the test' means figuring out exactly what is on a specific exam and drilling students on those topics and skills.

High school teachers are currently exempt from these reports, but let's consider the impact they would have on a subject such as Global History. We'll use the following hypothetical:

A dedicated and veteran social studies educator, Teacher A scans through a Global History Regents exam, dutifully noting every topic that makes an appearance. Teacher A cross-references the exam with two others from previous years, making tallies next to recurring content. Analyzing this information, Teacher A begins organizing a curriculum by topics most likely to appear on the test. Teacher A, a socially conscious individual who values linking historical events to those of the present, would like to include in-depth units on the Gulf War, Iranian Revolution, and the Korean War. After all, the countries and regions involved are now playing an increasingly important role in global politics. Unfortunately, after mapping out units and leaving ample time for review, Teacher A realizes there is no time to cover these topics. Conceding that the content of the test is a priority, both for job security and for students to acquire the capital that accompanies high test scores, Teacher A focuses only on information sure to make the test.

Testing has a stifling impact on the ingenuity and creativity of many excellent teachers. Unfortunately, teaching and learning are becoming so micromanaged that it is difficult to bring unique talents and knowledge to the classroom. Instead, teachers are forced to operate as droids, functioning within the limits of confining curricula. Since when, I wonder, did conformity become such a marketable talent?

There is more to teaching than testing. As such, teacher evaluations cannot be centered solely on test scores. In truth, conducting a fair evaluation of a teacher's worth would be an enormous task. It would require the evaluator to examine pages of lesson plans, curriculum maps, and student work. It would mean actually observing that same teacher deliver a lesson and interact with students. And while it might be a time-consuming, tedious job, it would provide an accurate portrayal of what that teacher brings to their students.

Conversely, setting up a computer program to analyze test results is pretty easy. It requires no more work than punching numbers into a keypad. Sadly, this process leaves little room for nuanced analysis and is completely antithetical to the care and consideration lying at the heart of good teaching. Do we not owe it to teachers and students to be a little more thorough?