I hope we can take a moment to reflect on what that extraordinary act was supposed to achieve. It was not just intended to guarantee services to children with disabilities, not just to serve them in the least restrictive environment, not just to see that they had IEPs. The idea was to help children achieve the greatest success of which they were capable.
Striving Readers was a serious, well-meaning attempt to solve a very important problem faced by far too many secondary students: difficulties with reading. But next time anyone thinks of doing something on that scale, I hope they will provide preference points in the application process for applicants who propose to use approaches with solid evidence of effectiveness.
Innovation within proven programs can be a good thing, when schools have fully embraced and thoroughly understand a given program and now can see where it can be improved or adapted to their circumstances. However, innovation too early in replication is likely to turn the best of innovations into mush.
The Laura and John Arnold Foundation has announced it is setting up a new Evidence-Based Policy and Innovation division in Washington. Its purpose will be to encourage policy makers to utilize evidence and data in their decision making. But not just encourage. According to the press release, it wants evidence and data to be "the primary factor" in policy makers' decisions.
To reach our 1962 moment will require sustained investment in development, evaluation, and scale-up of proven programs in all subjects and grade levels, and a change of policies to encourage the use of proven programs. I hope our 1962 moment is coming soon. To bring it closer, we have a lot of work to do, in innovation, evaluation, policy, and practice.
At one of the most divisive political moments in our nation's history, in a piece of legislation that itself is controversial and has failed to be reauthorized despite numerous attempts over the past six years, a bipartisan amendment providing for education innovation and research sailed through a Senate committee.
There are a lot of schools in the U.S. that need to be achieving much better outcomes. However, there is a much smaller group of schools in which achievement levels are appalling. The solutions for garden-variety low-achieving schools are arguably different from those for schools with the very worst levels of performance.
The way we can find out what works is to compare schools or classrooms assigned to use any given program with those that continue current practices. Ideally, schools and classrooms are assigned at random to experimental or control groups. That's how we find out what works in medicine, agriculture, technology, and other areas.
Let a thousand (local) flowers bloom, and then send sacks of proven flower seeds back to the locals to use as they see fit. But there is a key role in the middle of this process that only the federal government can play: evaluating programs and communicating the results of those evaluations. So it should be in education.
Just as Jeb Bush and others in his party seem to be recognizing that research and development are among the few education-related activities that should remain at the federal level, the Republican-controlled Congress is proposing to eliminate the Investing in Innovation (i3) program and cut back other federal investments in research and development.
The new Congress is working on alternative versions of the Elementary and Secondary Education Act. Republican proposals have a strong emphasis on getting the federal government out of what they believe should be local decisions. Unfortunately, the Republican bill in the House of Representatives takes out a critical support for local control: Investing in Innovation (i3).
President Obama's recently released budget may or may not make its way into law, but it has already made its way into the hearts of those who believe that government programs must be held to account for producing the outcomes they are intended to produce. Red or blue, everyone should want government dollars to make a difference.
The need for evidence should be obvious, but very few federal programs have evidence of effectiveness. Few even have a process for finding out what works and encouraging grantees to use proven approaches, instead of approaches aimed at the same outcomes that do not work or whose effects are unknown.
Teachers need not be researchers to contribute to their profession. By participating in networks of like-minded educators -- implementing, continuously improving, and communicating about practical applications of proven approaches -- they play an essential role in the improvement of their profession.
Moving to a focus on evidence-based reform will not solve all of the contentious issues surrounding accountability, but it could help us focus the reform conversation on how to move forward the top 95% of teachers and schools -- the ones who teach 95% of our kids -- and how to put accountability in its proper proportion.