In my previous post, I argued that metrics measure something, but not everything. Let's take a look at what a basic, metrics-based "logic model" looks like (though note that Charity Navigator's new model is much more extensive and challenging than the more streamlined model I am suggesting here):
- Inputs (what you bring to the table as resources: staff, funds, expertise)
- Activities (programs and services; what you do)
- Outputs (things that can be measured -- numbers of people you serve, units of housing you build, meals you provide, numbers of classes you conduct)
- Outcomes (results -- the impact you have in the short, medium, and long term)
Good, but not good enough. For the model to be complete, it needs to begin with a description and analysis of the community in which you work and the specific challenges you face. If you want to know, at the end of the process, what impact really means, you first have to know, and state, what the conditions are in which your work takes place and out of which it emerges. Describing these is a complex task -- sometimes even a moving target -- that doesn't easily lend itself to metrics.
In addition, how you assess your results will depend on what you value. If, at the end of the line, you are measuring something intangible like the resiliency or grit of vulnerable children who have grown up in poverty, you will face a greater challenge than will an organization seeking, say, to measure an increase in the rate of employment for job-seeking adults, where the numbers work in its favor. (This is not to say that the work is harder, only that the task of assessing the work is.) You have to make sure that you have identified grit and resiliency, and any other critical life skills, as core values, and you have to explain why they are.
As citizens and donors, we should do what we can to make sure that organizations working to build more creative communities, and to devise programs that deal with extremely challenging (if not, thus far, intractable) social problems, are not excluded because their outcomes are not as easy to measure as others. If I am visiting a community center in Washington, DC's Ward 8, where the average family income is $9,100 a year, I should not be looking at outcomes the same way I would if I were visiting a community where somewhat better-off youngsters need a smaller boost in order to be successful. The hill is steeper in some places than it is in others, and we have to take that into account.
At the Catalogue for Philanthropy: Greater Washington, we have approached these questions in what is, given the direction that evaluation appears to be taking, a rather unusual way. We have gathered the community of professionals in the field -- from foundations, corporate giving programs, peer nonprofits, government agencies and the philanthropic advisory community -- and asked them to evaluate applicant nonprofits. Our review process has three stages: programmatic review (the conditions you address, the programs you have created, the impact you have); financial review (reasonable projections of income and expenses; diversified funding; transparency); and site visits (reviewers are asked to share their experience of previous visits, not to visit anew).
Some 120 individuals participate annually, sharing their expertise and direct knowledge. Communities have this knowledge, but it is rarely aggregated or shared with the public at large. We share it in our annual print catalogues and, of course, online, and we are able to do what the rating entities cannot do: actually evaluate need, program quality, and impact -- without overburdening community-based nonprofits that, by and large, lack the resources to perform extensive evaluations themselves.
Creating communities of knowledge -- actually pooling the know-how of people who have expertise in the field -- seems like an obvious thing to do in the service of philanthropy, especially in an era in which knowledge-sharing has become so much easier. It means, too, that we can ask questions that don't lend themselves to easy answers because we can use the brainpower of the community to identify the nonprofits that are doing the best work. There is no reason why this model could not be shared, and why there could not be a Catalogue for Philanthropy in every region of the country -- something we hope to make happen in the not-too-distant future. (A note: the Catalogue focuses on community-based nonprofits with budgets below three million. These are not, by and large, the ones reviewed by Charity Navigator, though this is a category into which the great majority of all nonprofits falls.)
For the moment, though, nonprofits need to remember that -- unless they are primarily reliant on the U.S. Government, in which case they had better pay attention to its model -- most individual donors are not themselves professional givers. Many are driven more by their desire to give back, their personal passions, and their wish to make a difference than they are by evidence-based impact assessments.
This does not mean that data and measurement do not matter or that a reasonable approach to evaluating impact should not be part of what foundations are funding and even teaching. But charities also need to find a way to assess their work in a manner that does justice to its complexity, and then translate what they learn into an account that will have meaning and power for individual donors whose contributions make up nearly three quarters of all donations. We should keep in mind that it is not just the good work we do that matters, but also the speaking and writing about it -- the sharing of it -- that counts. We need to train ourselves and teach others how to be agents of the imagination, ready and willing and equipped to tell compelling stories about the differences for the better that philanthropic work makes.
The task is a challenging but essential one. It needs more attention than it has received, and I intend to address it in future posts.
Follow Barbara Harman on Twitter: www.twitter.com/@cataloguedc