Thinking We Know What We Test

By Justin Snider
French students take the philosophy test of the baccalauréat exam (the French high school graduation exam) on June 18, 2012, at the Pasteur high school in Strasbourg, eastern France. Some 703,059 candidates are registered for the 2012 session; the results will be announced on July 6, 2012. (FREDERICK FLORIN/AFP/Getty Images)

This piece comes to us courtesy of The Hechinger Report.

An op-ed in The New York Times on August 20th, "Testing What We Think We Know," argued that many medical procedures are carried out in the United States despite a very thin evidence base for their efficacy. It's high time, the author wrote, to invest more in research to figure out first what actually works. The op-ed's author, H. Gilbert Welch, is a professor of medicine at the Dartmouth Institute for Health Policy and Clinical Practice and a co-author of Overdiagnosed: Making People Sick in the Pursuit of Health (2011).

Welch's op-ed about the field of medicine could just as easily have been about the field of education (but then, would The Times have published it?). The problems besetting both are strikingly similar. In that spirit, what follows is a riff on Welch's op-ed, and it'll likely make sense only if you first read "Testing What We Think We Know."

By 2010, many politicians were recommending top-down accountability to healthy schools and rigorous assessments to determine the effectiveness of older teachers. Both interventions had become standard educational practice.

But in 2020, a randomized trial showed that top-down accountability caused more problems (teaching to the test, cheating) than it solved (failing schools, under-educated students). Then, in 2029, trials showed that top-down accountability led to many unnecessary tests and had a dubious effect on school improvement.

How would you have felt--after more than two decades of following your elected officials' advice--to learn that high-quality randomized trials of these standard practices had only just been completed? And that they showed that both did more harm than good? Justifiably furious, I'd say. Because these practices affected millions of American schoolchildren, they are locked in a tight competition for the title of greatest educational error on record.

The problem goes far beyond these two. The truth is that for a large part of pedagogical practice, we don't know what works. But we pay for it anyway. Our annual per-pupil K-12 educational expenditure is now over $11,000. Many countries pay half that--and enjoy similar, often better, outcomes. Isn't it time to learn which practices, in fact, improve our educational system, and which ones don't?

To find out, we need more education research. But not just any kind of education research. Education research is dominated by research on the new: new tests, new technologies, new disorders and new fads. But above all, it's about new markets.

We don't need to find more things to spend money on; we need to figure out what's being done now that is not working. That's why we have to start directing more money toward evaluating standard practices--all the tests and treatments that policymakers are already pushing.

There are many places to start. Value-added assessments are increasingly finding microscopic abnormalities in the teachers' lounge called M.U.T.S., or Maybe Underperforming Teachers. Currently we treat them as if they were invasive cancers, with public shaming, firing and school closures. Some elected officials think this is necessary; others don't. The question is relevant to more than 3.5 million teachers each year. Don't you think we should know the answer?

Or how about this one: How should we screen for underperforming students? The usual approach, standardized testing, is simple and cheap. But more and more students and parents are opting out of public schools: private schools alone enroll more than five million students. Another 1.5 million are home-schooled. And untold thousands attend virtual schools, where they learn at home in front of computers. These options are neither simple nor cheap. Which is better? We don't know.

Let me be clear: answering questions like these is not easy. The Department of Education is in fact preparing to take on the question of whether underperforming youngsters can be made to perform like their peers. The trial, which will involve up to 50 million students, will last a decade and surely cost billions of dollars.

Research like this takes more than grant money. For starters, it takes a research infrastructure that monitors what standard practice is--data on what's actually happening across the country. Because of PISA, we have a clear view of students aged 15, but the picture is a lot cloudier for those younger or older. Basic questions, like how common illiteracy is and what testing is done to determine illiteracy rates, remain unanswerable.

It also takes a research culture that promotes a healthy skepticism toward standard pedagogical practice. That requires teacher-researchers who know what standard practice is, who have the imagination to question it, and who have the skills to study it. These teachers need training that's not yet part of any education school curriculum; they need mentoring by senior researchers; and they need some assurance that investigating accepted approaches can be a viable option, instead of career suicide.

We have to move quickly. The administrative demands of teaching, on one side, and the competition for school funding, on the other, make it increasingly difficult for teacher-researchers to keep instructing students. They become isolated from standard practice, and their ability to study it diminishes. School leaders who are well positioned to study these issues are increasingly directed toward enhancing productivity--questions about how we can do this better, faster or more consistently--instead of questions about whether the practices are warranted in the first place.

Here's a simple idea to turn this around: devote 1 percent of educational expenditures to evaluating what the other 99 percent is buying. Distribute the research dollars to match the instructional dollars. Figure out what works and what doesn't. The Institute of Education Sciences (created as part of the Education Sciences Reform Act of 2002) is supposed to tackle questions of direct relevance to students and teachers and could take on this role, but its budget--less than 0.003 percent of total spending on education--is far from sufficient.

A call for more educational research might sound like pablum. Worse, coming from an educational researcher, it might sound like self-interest (cut me some slack; that's another one of our standard practices). But I don't need the money. The system does. Or, if you prefer, we can continue to argue about who pays for what--without knowing what's worth paying for.

Justin Snider is a contributing editor at The Hechinger Report. He is an advising dean at Columbia University, where he also teaches undergraduate writing. Snider's research interests include school reform, press coverage of education, urban politics, mayoral control and transatlantic relations. Previously, he taught high school English and advised student publications in the United States, Austria and Hong Kong. A California native, Snider is a graduate of Amherst College, the University of Chicago, the University of Vienna and Harvard University.
