Teacher Development At Heart Of New Center For American Progress Studies

The Teacher Quality Question

Jordan Henry, a Los Angeles high school teacher, recently received his score as part of the city's pilot program, which takes student test scores into consideration when evaluating teachers.

In the program, each teacher receives a numerical value representing the degree to which he or she influenced students' learning.

His score? Middle of the scale.

"It's a big ho hum," Henry said Tuesday. "What does this tell me?"

Participating in the pilot -- now facing a lawsuit from the Los Angeles teachers' union -- landed Henry squarely in the gap between the policy and the reality of new methods designed to help teachers improve, a task less simple than it seems.

And as states take on the task, either through new laws or promises made to the federal Department of Education to escape the strictures of the No Child Left Behind Act, they're facing a dizzying array of rubrics and coaching methods -- and often coming up short.

A pair of papers released Tuesday by the Center for American Progress aims to give a better sense of which policies actually work, both for identifying teachers' skill levels and for helping the teachers who need extra support improve.

Bolstering teachers' evaluations and professional development has emerged as a popular way for school districts and states to attack the problem of underperforming schools. These tactics are regularly endorsed by data-driven education reformers and the Obama administration. Programs that encourage data-based evaluations, such as the federal Race to the Top competition, have spawned an entire industry of professional development and evaluation providers. These strategies rely on the understanding that teacher quality is the most important in-school factor in determining student success, an assertion some criticize for ignoring contextual factors such as poverty.

But according to a new study by Robert Pianta, dean of the Curry School of Education at the University of Virginia, there are few professional development methods that have been shown to effectively improve student learning.

"It is a travesty that despite districts spending thousands of dollars per teacher each year on professional development, these dollars are most often spent on models that are known to be ineffective," Pianta wrote in the paper, "Teaching Children Well: New Evidence-Based Approaches to Teacher Professional Development and Training."

And while teacher evaluation systems are often used to fire ineffective teachers who are said to be bringing down student achievement, Pianta argues that the evaluations should instead be used to target areas where teachers can improve and increase their skills.

Pianta's paper promotes methods his research group developed to enhance teaching skills: MyTeachingPartner, an online portal for professional coaching, and CLASS, the Classroom Assessment Scoring System, a structured method of classroom observation. CLASS is now used by Head Start preschool programs across the country.

The best professional development programs, Pianta writes, will include videos for teachers and one-on-one coaching. Students learn most from "positive, cognitively demanding student-teacher interactions."

But these practices define teacher effectiveness as increasing student test scores, noted Segun Eubanks, the National Education Association's teacher quality director.

"It's based on a definition of effectiveness ... that is based almost solely on a teacher's ability to increase scores on narrow standardized tests that are looking at one measure of learning," Eubanks said at the reports' release Tuesday. "I think tests are extremely important ... [but] we sell ourselves short if we define our measure of effectiveness the way we've defined it so often on .. test scores."

Teacher evaluations are also increasingly using student test score data to measure teacher quality. Washington, D.C., and 32 states have changed their teacher evaluation policies over the last three years, with almost half of all states now requiring that these ratings include evidence of student learning.

But teacher evaluation gets even more complicated in high school, where few teachers teach tested subjects. The second CAP paper -- this one by Brown University Professor John Tyler, titled "Designing High Quality Evaluation Systems for High School Teachers" -- addresses those kinks.

Tyler writes that dropout decisions are made primarily in high school, and that improving teacher quality through evaluations could help students stay in school. "The research-based linkage is that student engagement is related to dropping out and teachers’ behaviors and practices are, in turn, related to student engagement," he wrote. He also cautions that, in some cases, evaluation systems can create perverse incentives for teachers to encourage low-scoring students to drop out in order to boost scores.

Another problem in high school teacher evaluation, Tyler writes, is students' varying "pathways." Most data-based evaluations of teachers rest on "value-added formulae" that aim to isolate a teacher's effect on student test scores. Value-added measurements rely on an estimate of predicted student achievement. But high school students have less consistent educational backgrounds and fewer previous courses in common -- what Tyler calls "pathways" -- making value-added data harder to calculate.
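To illustrate the general idea -- neither paper prescribes a particular model, so the equation below is only a schematic of a commonly used form -- a value-added estimate typically comes from a regression that predicts each student's current score from his or her prior score and background characteristics, then credits the teacher with the leftover difference:

$$ A_{it} = \beta A_{i,t-1} + \gamma X_{it} + \theta_{j(i)} + \varepsilon_{it} $$

Here $A_{it}$ is student $i$'s test score in year $t$, $X_{it}$ stands for student characteristics, $\theta_{j(i)}$ is the estimated effect of the student's teacher $j$, and $\varepsilon_{it}$ is unexplained variation. The prior-score term $A_{i,t-1}$ is precisely what becomes unreliable when high school students arrive by way of different course pathways.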

In his paper, Tyler suggested solutions such as using exams that test teachers' content knowledge; recording classes so teachers learn from watching themselves; tailoring value-added measures to the high school level; and using all available information, despite its flaws.

"Anything you talk about can be couched in statistical measurement error," Tyler said Tuesday. "Any time you're measuring complex behavior, there's going to be complex measurement error."

Eubanks, though, criticized the tendency to treat professional development and teacher evaluations as a silver bullet.

"We always say, let's find that one thing," Eubanks said. "If we do that one thing, everything is going to be better again."
