R&D That Makes a Difference


Over the course of my career, I've written a lot of proposals. I've also reviewed a lot, and mostly I've seen funded projects crash and burn, or produce a scholarly article or two that are never heard from again.

As evidence becomes more important in educational policy and practice, I think it's time to rethink the whole process of funding for development, evaluation, and dissemination.

Here's how the process works now at the federal level. The feds put out a Request for Proposals (RFP) in the Federal Register. It specifies the purpose of the grant, who is eligible, funding available, deadlines, and most importantly, the criteria on which the proposals will be judged. Proposal writers know that they must follow those criteria very carefully to make it easy for readers to know that each criterion has been satisfied.

The problem with the whole proposal system lies in the perception that each proposal starts with a perfect score (usually 100), and is then marked down for any deficiencies. To oversimplify, reviewers nitpick, and if there is much left after the nits have been picked, the proposal wins.

What this system rewards is enormous care and OCD-level attention to detail. It does not reward creativity, risk, insight, or actual utility for schools. Yet funding grants that do not move practice forward at any significant scale does little good in an applied field like education (in related fields such as psychology, purely basic research might justify such approaches, but in education this is a hard argument to make). Maybe our collective inability to do research that affects practice on a broad scale explains some of our political leadership's lack of enthusiasm for research.

So what would I propose as an alternative? I'm so glad you asked. I'd propose that RFPs be explicitly structured to ask not, "Why shouldn't we fund this proposal?" but, "Why should we?" That is, proposal writers should be asked to make a case for the potential importance of their work. Here's a model set of evaluation standards to illustrate what I mean.

A. Significance
1. What are you planning to create?
2. What national problem does your proposed program potentially solve?
3. What outcomes do you expect to achieve, and why are these important?
4. Based on prior research by yourself and others, what is the likelihood that your program will produce the outcomes you expect?
5. What is the likelihood that, if your program is successful, it will work on a significant scale? What is your experience with working at scale or scaling up proven programs in educational settings?
6. In what way is your program creative or distinctive? How might it spark new thinking or development to solve longstanding problems in education?

B. Capabilities
1. Describe the organizational capabilities of the partners to this proposal, as well as the capabilities of the project leadership. Consider capabilities in the following areas:
a. development
b. roll-out, piloting
c. evaluation
d. reporting
e. scale-up
f. communications, marketing
2. Timelines, milestones

C. Evaluation
1. Research questions
2. Design, analysis

D. Impact
Given all you've written so far, summarize in one page why this project will make a substantial difference in educational practice and policy.

If we want research and development to produce useful solutions to educational problems, we have to ask the field for just that, and reward those able to produce, evaluate, and disseminate such solutions. Ironically, the federal funding stream closest to the ideal I've described is the Investing in Innovation (i3) program, which Congress may be about to shut down. i3 is at least focused on pragmatic solutions rather than theory-building, and it has high standards of evidence. But whether i3 survives or is replaced by another initiative to support innovation, development, evaluation, and scale-up of proven programs, I'd argue that such a program needs to focus even more on pragmatic issues of effectiveness and scale. Reviewers should be exclaiming, "I get it!" rather than "I gotcha!"
