Proven Programs vs. Local Evidence

All evidence from rigorous studies is good evidence, as long as it addresses actionable policies or practices that could make a difference for student outcomes. However, there is an important distinction between two kinds of good evidence that I think is worth making.

One kind of good evidence relates to proven programs. These are approaches to teaching various subjects, increasing graduation rates, improving social-emotional or behavioral outcomes, remediating or preventing learning deficits, and so on. Examples might include programs to improve outcomes in preschool, early reading programs, science programs, math programs, bilingual programs, or school-to-work programs. A hallmark of proven programs is that they are designed for replication. That is, if the findings of local, regional, or national evaluations are positive, the program could, in principle, be used elsewhere, perhaps with modest adjustments to local circumstances and needs.

The other type of good evidence, local evidence, is derived internally by a given school, district, city, or state. Such evidence helps policymakers and educators understand their own situation, opportunities, and problems, and evaluate policies or practices already underway or under consideration. Local evidence may be particularly valued by the local leadership because it addresses problems they care about, but it is not intended to produce answers to universal problems, except perhaps as a byproduct. For example, local evidence might address the impact of a local change in graduation requirements, policies on access to bilingual programs, or teacher certification procedures, without any particular concern for the degree to which the findings might inform other districts or states. Other districts and states may learn from the example of the district or state with the local evidence, but they may never hear about it or may not think it is relevant to their own systems.

Of course, proven programs and local evidence can overlap, as when a given district or state implements and evaluates a replicable program that responds to its own needs, or when a local district collaborates in a national evaluation of a program clearly intended for national application. Yet assessments of proven programs and local evaluations usually differ in several ways. First, research on proven programs is usually funded by federal agencies or by the companies that developed the programs, so national impact is intended from the outset. Second, findings from evaluations of proven programs are usually published, or made nationally available in some form, while local evaluations may or may not be made available beyond an internal report. Every year, Division H of the American Educational Research Association (AERA) makes available award-winning local evaluations of all sorts of programs and policies, and for decades I have reviewed these to find high-quality evaluations of approaches with national significance, which I then include in the Best Evidence Encyclopedia (BEE) if they meet BEE standards. I always find some real gems. These terrific evaluations are rarely published in journals, however, since district and state research directors have little incentive, time, or resources to publish them.

Research on proven programs is taking on greater importance because federal initiatives such as Investing in Innovation (i3) are producing and evaluating such programs, and practical programs such as School Improvement Grants (SIG) and Title II SEED grants now encourage the use of programs with strong evidence of effectiveness. As federal programs increasingly encourage the use of proven programs where they are appropriate, research evaluating proven programs will become more central to evidence-based reform.

Proven programs and local evaluations play different, complementary roles in education reform. The difference is something like the difference in medicine between research evaluating new drugs, devices, and procedures, on one hand, and research on the operations and outcomes of a given hospital, group of hospitals, or state health system, on the other. When a new drug, for example, is found to be effective for patients who fit a particular profile in terms of diagnosis, age, gender, and other conditions, then this drug is immediately applicable nationwide. If it is clearly more effective than current drugs and has no more side effects or other downsides, the new drug may become the new standard of care very quickly, throughout the U.S. and perhaps the world.

In contrast, local evaluations of hospitals and medical systems are likely to inform the local leadership but are less likely to inform the whole country. For example, a local evaluation might check levels of bacteria in emergency rooms, note the time it takes for ambulances to get from car accidents to the hospital, or assess patient satisfaction with their treatment, and then measure changes in these factors when the hospital implements new procedures intended to improve them.

Both types of research and development are valuable, but each has its own particular benefits and drawbacks. Studies of proven programs are more likely to be published, or at least made available through the Education Resources Information Center (ERIC) at the Institute of Education Sciences (IES) or on websites, as noted earlier, especially if academics were involved (academics publish or perish, of course). If positive outcomes on learning are found, proven programs are more likely to have the capacity to go to scale nationally; local districts have little incentive or capacity to do this beyond their own borders. Creators of proven programs are likely to have a longstanding interest in their programs, while interest in local evidence may evaporate when the superintendent who commissioned it leaves office. Proven programs are likely to contribute to national evidence or experience about what works, while local evaluations may be done to solve a local problem, with little interest in how the findings add to broader understanding.

On the other hand, local evaluations are more likely to engage local decision makers and educators from the beginning, and therefore to benefit from their wisdom of practice. Because the local leadership was involved all along, they may have greater commitment to obtaining good data and then acting on it. Local evaluations exist in a particular context, which may make the findings of interest in that context and in other places with similar contexts. For example, educators in El Paso, Texas, are sure to pay a lot more attention to research done in El Paso, Laredo, or Brownsville than to research done in Philadelphia or Traverse City, Michigan, or even Phoenix or Los Angeles.

Proven program research and local evaluations are not in conflict with each other, but it is useful to understand how they differ and how they can best work together to improve outcomes for students. As we build up stronger and broader evidence of both kinds, it will be important to learn how each contributes to learning about optimal practice in education.
