Signing Off from One More Teacher Preparation Ranking

It has become increasingly clear that NCTQ ratings and rankings are but one more compliance activity, like accreditation, foisted on teacher preparation programs. The activity takes valuable faculty and staff time away from real work with real students.

On Monday, June 16, I received an email from the National Council on Teacher Quality (NCTQ) with their latest rankings of teacher preparation programs in advance of Tuesday's public release. While, as of this writing, I do not know the ranking of the Curry School's teacher preparation program(s), I do know that last week I endorsed a request from my colleagues to no longer participate in the NCTQ process. So in the words of the boxer Roberto Duran, the Curry School declares "no mas" to the NCTQ effort to benchmark the quality of teacher preparation programs; here's why.

I am an unabashed advocate of better evaluation of teachers and teacher preparation. Admittedly, teacher preparation is complex and tough to study in rigorous ways that translate directly into scalable models or policies, or that drive detectable improvements. But complex problems are not impossible problems -- and for too long teacher preparation has used complexity as an excuse to avoid the heavy scientific lifting of figuring out how to produce effective teaching on a scalable basis. Compared with other tough problems, if 20 years of research could make real progress on treating something as complex as AIDS, it seems we could at least show some progress in improving teacher preparation.

For two decades my research focused on developing and testing methods for observing and improving teachers' interactions with students. Our efforts aimed to identify teacher behaviors that produce student learning (not just test scores), and to create and evaluate training and support models that produce and improve those behaviors. We have been reasonably successful in building an evidence base for measurement and production of effective teacher behavior based on quantitative metrics, experimental designs, and many thousands of teachers. And our research is only one example of work making real progress, not much of which is reflected more broadly in teacher preparation accreditation or credentialing systems.

And for the past seven years I have been a dean at a school of education with a teacher preparation program. I have watched as faculty members steadfastly manage the challenges of providing rigorous and intensive training to 19- to 25-year-olds to prepare them to work in today's classrooms and schools. They do this work in the context of state regulations for training and certification of teachers that make little or no sense, have very little connection to evidence (of any sort), and no potential to drive improvement of teaching in public education. Our program also spends hundreds of hours ensuring compliance with accreditors, a process requiring documentation on top of documentation -- some focused on describing what we do with our students and some focused on how we know it's working (or not).

NCTQ entered this world of "evidence and documentation" a few years ago. In what was experienced by most teacher preparation programs as a fairly coercive process, NCTQ requested that programs submit an assortment of documents -- course syllabi, program handbooks, evaluation rubrics, and on and on. These were scored against a set of prescribed standards, and programs were ranked. We were told that if we did not submit materials we would still be scored (assigned a "zero") and labeled as non-participants. The Curry School received its scores (not great) and remained engaged even though many of my colleagues at other institutions refused to participate or withdrew. We felt there might be some value in the analysis and were willing to check it out.

But it has become increasingly clear to me that the NCTQ ratings and rankings are but one more compliance activity, like accreditation, foisted on teacher preparation programs. Like accreditation, the NCTQ activity takes valuable faculty and staff time away from real work with real students, and although it can be helpful in calling attention to general areas needing work, it adds no value beyond the feedback we get from accreditation (itself not that detailed) and is in large part redundant with accreditation. We don't need one more tax on valuable time.

Some form of evaluation, accountability, and assessments of impact must be part of teacher preparation. In fact it seems reasonable, as good stewardship, to require that programs receiving financial support from the public provide evaluation data. And one would hope that programs themselves would want these data to drive planning and program development and improvement. Unfortunately, entities like NCTQ exist largely because teacher preparation programs have not done the hard work of rigorous, self-critical study, and self-evaluation.

The fundamental challenge is measurement. Everyone (states, accreditation agencies, NCTQ) has a list of standards. But we have precious few measures for those standards and almost no evidence that the available metrics (like SAT scores, PRAXIS scores, or how many courses a student takes) predict how graduates perform as teachers. The rubrics, procedures, and results from accreditation, state certification, NCTQ reviews, and most teacher preparation curricula are merely hypotheses. We have a lot of hypotheses and too few tests of them. My sense is that until and unless the field of teacher preparation owns the actual measurement and evaluation of its hypotheses, it is hard to imagine any systematic gains in impact or quality. No outsourcing of this work (to accreditors or to NCTQ) will solve that problem.

We are leaving the NCTQ fold because it has become one more activity that produces little to no value for our students or our program's impact. And it is my own professional view that the field does not need one more compliance game, particularly one that purports to benchmark quality without any strong evidence to support that claim. It would be far more useful to devote that time and energy to really measuring and determining value and impact.
