I want to do good, but I don't want to waste my time or money doing it. You, too?
In the action world of volunteering, organizing and check-writing, anti-poverty evaluation feels paralyzing. Mostly we ignore the evaluators because, in large measure, the research studies inconclusively nitpick: this feature of one program works better than that feature of another. Or a program oversells itself. Or a study duplicates what seasoned practitioners already know. Or comes the ubiquitous observation that more studies are needed.
Evaluations rarely guide us in making the big choices among sectoral interventions, such as health (primary care, prevention, training health professionals or building hospitals?) versus agriculture (sustainable small farms, treadle pumps or massive irrigation projects?). Or small business development versus clean water? And what about the relative impact of energy, education or elusive governmental reform?
And, often the research findings present bogus choices. If your goal is increased school attendance, surely deworming children and better teachers and free school uniforms and sanitary napkins for girls and, and, and... are all necessary ingredients for success.
Moreover, the house of mirrors in which evaluators debate the best research methodology, whose data are more valid, and which studies are more replicable and rigorous is never-ending. Every approach comes with cautions and caveats. For a smart, thoughtful taste of it, the best thinker on the topic is David Roodman of the Center for Global Development. From afar we respect the integrity of open scholarship, but parsing the academic debate for an actionable poverty-reduction plan in the here and now becomes a diversionary fool's errand.
Scholarly hubris disheartens us. As How Matters editor Jennifer Lentfer reminds us, "Recognize that this ephemeral life is governed by a multitude of forces. Control is an illusion. Scientists are wrong all the time."
For a counter view, "More Than Good Intentions," a new book from Yale's Innovations for Poverty Action (IPA), is a ringing manifesto for scientific evaluation. "[The authors] want us to believe that Randomized Control Trials (RCTs) are an essential tool in identifying solutions to problems through research," blogs Guy Stuart, a public policy lecturer at Harvard's Kennedy School. "In fact, there is clear evidence that they think this is the only way to do research. They continuously, and condescendingly, equate 'rigor' with the use of RCTs."
Anti-poverty programs need to improve operations and reach more poor people, but the messianic scientific evaluation message snubs the many non-academic ways we get constructive feedback -- trial and error, marketplace feedback, listening to clients, mimicking competitors, etc. Are these more or less valid?
Evaluation research is costly. IPA, for example, operates in over 40 countries and employs 500 staff with an annual budget of $25 million. Does it make the rest of us honorary members of the Flat Earth Society when we wonder if RCTs might become the Academic and Evaluation Consultant Full Employment Act?
I am curious about one thing: has anyone done a randomized controlled trial to determine if research evaluation centers are having measurable impact in the world of social change?
Next week: Part 4, on practical evaluation issues...
Follow Jonathan Lewis on Twitter: www.twitter.com/CafeImpact