Inflated Ratings Don't Make the Grade

By Ian Scott

This post was originally published on the TNTP Blog.

This week, Indiana became the most recent state whose teacher ratings, based on a new evaluation system, have come back with nearly all teachers designated effective or highly effective. It's a familiar challenge that many states have sought to correct in recent years, pushing for new evaluation rubrics and differentiated rating scales. The goal of these new evaluation tools is clear: produce more accurate information about teacher performance and give teachers higher-quality feedback to help them improve. But with teacher ratings still coming back so uniform, even with the new tools, we have to ask: What will it take to get a more accurate distribution of teacher ratings?

In Indiana, fewer than 3 percent of teachers were rated in need of improvement or ineffective, consistent with other states that have recently implemented new evaluation tools, like Georgia, Florida and Massachusetts. These new evaluation models—with multiple measures and higher, more rigorous standards for teachers—are much better on paper than their predecessors. But an improved tool does little good if district and school leaders aren't prepared to use it accurately.

That’s a problem—because without evaluation tools that reflect real differences in teacher performance, states, districts and school leaders don’t have the information they need to support teachers or to focus on recognizing and retaining their top performers. It’s a loss for everyone—teachers, parents and especially kids.

Of course, it would be great news for Indiana if the latest ratings truly reflected the real performance of teachers statewide, but that argument isn't credible. If it were true, we should expect to see nearly all students making adequate progress in their learning. That isn't the case. Even first-year teachers in Indiana were rated almost uniformly effective—but we've seen in our own evaluations of TNTP Teaching Fellows, in Indiana and elsewhere, that more than 4 percent of first-year teachers are in need of improvement (unsurprisingly, given that we know a teacher's first year is critical for their professional growth). As in any performance-based profession, a spectrum of effectiveness should be expected among teachers.

So why are we seeing inflated ratings so consistently? Deep cultural resistance to honest ratings is certainly a major factor. But ultimately, evaluation tools are only as effective as school leaders are at implementing them—and in our work with states and districts, we’ve also seen administrators struggle with a number of concrete skills that contribute to inflated numbers. School leaders need to be able to identify specific evidence of a teacher’s strengths or growth areas, for example, and deliver constructive feedback with actionable next steps. And they have to be able to look at data to help them understand exactly what impact a teacher’s instruction is having on student outcomes.

In the small number of schools where we’ve seen time invested in developing these skills, teacher ratings have been more reflective of student performance—and teachers are more likely to be getting the support they need to improve. How can more schools get there? From our perspective, there are several steps states and districts can take to make sure that in the next evaluation cycle, the tool works the way it’s supposed to:

Identify positive outliers. Look for districts and schools that seem to be getting a more realistic distribution of ratings—and try to pinpoint what they’re doing differently. The first step in improving evaluation rollout is defining what effective implementation looks like. Based on our work, we think accuracy (in the form of a correlation to actual student outcomes) and targeted strategies that lead to teacher growth are essential.

Clearly define good practice. It’s critical to establish robust, consistent definitions of what proficiency looks like within each area for which teachers are evaluated. Using video to highlight the differences between effective teachers in action and those needing improvement is one way to ensure that teachers and school leaders have a common understanding of what they’re aiming for.

Give evaluators more training. This work is time intensive, but there are also ways to leverage technology to train evaluators more efficiently on best practices. TNTP has developed online tools that help principals practice delivering critical feedback so that they are more confident in how to frame the conversation. This allows them to better ensure that teachers fully understand what they’re expected to do and how to proceed with improving their practice. Of course, districts also need to give schools the time and resources to observe teachers and give frequent feedback. That might mean establishing hybrid roles for master teachers who can spend time observing colleagues, or finding creative ways to take building management responsibilities off the plates of school leaders.

Inflated ratings don’t mean that it’s time to throw in the towel; after all, decades-old practices and habits don’t change overnight. Rather than going back to the drawing board—or just giving up—districts and states need to invest more resources in empowering school leaders to get this right. Done well, an effective evaluation process contributes to a strong instructional culture: high expectations, a consistent definition of effective instruction, and strategic supports for teachers. That’s a target worth aiming for, even if it takes a few tries to get there.

Ian Scott is Partner, New Teacher Effectiveness and Performance Management at TNTP.
