To Achieve Big Results From Social Policy, Add This


Adding Continuous Real-Time Learning to After-the-Fact Program Evaluations
Co-authored by Anthony Bryk

The challenge of making progress on the country's social problems is attracting strange bedfellows these days. In a rare instance of Washington bipartisanship, Republican Ron Haskins wrote a New York Times op-ed column to applaud the Obama administration for its use of evidence in assessing social programs. Haskins is absolutely right that rigorous evidence should inform social spending, and we welcome his call that the new Congress "reject efforts by some Republicans to cut the Obama administration's evidence-based programs."

We believe, however, that Haskins' vision for how we improve our social programs and policies is too constrained. It is simply not enough to rely solely on experimental evidence to improve the outcomes of our efforts -- and the alternative is not guesswork.

There is enormous variability in the impact of social interventions across different populations, different organizational contexts, and different community settings. We must learn not only whether an intervention can work (which is what randomized controlled trials tell us), but how, why, and for whom -- and also how we can do better. We must draw on a half-century of work on quality improvement to complement what experimental evidence can tell us. And, importantly, the learning must not be left to dispassionate experts alone; it must involve the people actually doing the work, as well as those whose lives the interventions are trying to enrich.

The growing chasm between unmet social needs and what our social institutions are routinely accomplishing cannot be crossed one small step, or one standardized program, at a time. Something shown to have worked somewhere will not automatically produce the same effects elsewhere. A proven program can become a piece of a bridge that could help us cross the chasm. We will reach the other side, however, only when the results of these social experiments are joined with other forms of evidence: evidence that emerges from efforts to reform whole systems, from learning about variations in context and performance, and from the continuous, on-the-ground experience of those directly involved in the work of improving outcomes.

The distinguished epidemiologist Lawrence Green, for example, has pointed out that several thousand controlled trials aimed at reducing tobacco use through individual behavior change had only marginal effects. It wasn't until two states, California and Massachusetts, undertook more complex combinations of strategies -- involving the health care system, government regulation and taxation of tobacco, restrictions on advertising, and public health programming and messaging -- that it became clear that the synergy of many components devoted to a clearly defined result was making the difference. The upshot was a doubling, and then a tripling, of the annual rate of decline in tobacco consumption in California and Massachusetts relative to the other 48 states.

A current example of putting pieces together comes out of the Carnegie Foundation's Pathways Improvement Communities demonstration. This initiative addresses the problem of the extraordinarily high failure rates among the half-million community college students annually assigned to developmental (remedial) math instruction as a prerequisite to taking degree-level college courses. Traditionally, only about 20 percent of those enrolled ever make it through these courses -- a critical gatekeeper to opportunity.

A network of faculty members, researchers, designers, students and content experts joined to create a new system built on the observation that "structured networks" accelerate improvement. They are a source of innovation, and of the social connections that facilitate testing and diffusion. They provide a safe environment for participants to analyze and compare results and to discover patterns in data. In addition, they involve the people on the ground in generating and analyzing the evidence that comes out of their daily work.

Network participants identified six primary causes of the high failure rates, and then tested improvement hypotheses. They used evidence "to get better at getting better," and thereby dramatically improved outcomes -- tripling the student success rate in half the time. And these improvements have occurred for every racial, ethnic and gender subgroup and at virtually every college where the innovation has been taken up.

If, as the health reform guru Atul Gawande contends, "Making systems work is the great task of our generation," we must expand beyond our current preoccupation with evidence of "what works" in the small units that can be experimentally assessed. Achieving quality outcomes reliably, at scale, requires that we supplement carefully controlled, after-the-fact program evaluations with continuous real-time learning to improve the quality and effectiveness of both systems and programs.

Anthony Bryk is president of the Carnegie Foundation for the Advancement of Teaching and author of the forthcoming book, Learning to Improve: How America’s Schools Can Get Better at Getting Better.
