
An Overworked Scientist and a Sweaty Social Worker


On a Tuesday night, precisely at 10:45 p.m., my brother rushes to his air-conditioned immunology laboratory at Caltech. He puts on his gloves, walks over to the tissue culture hood, and refills the nutrient media in five petri dishes that hold several thousand bone marrow cells. Afterwards, he records observations in his lab notebook and briefly browses online scientific journals for recently published studies on the mouse immune system.

Thousands of miles away, Priya navigates the congested alleyways of the Fakir Bagan slum, home to more than 20,000 individuals. The stifling humidity of Calcutta soaks her clothes with perspiration. Although she strategically bypasses the largest garbage piles and sewers, fetid odors stemming from public bathrooms still permeate the surrounding air. She finally reaches the community health center. Priya works at the clinic, where mothers bring their children for free immunizations on Fridays.

On the surface, it's hard to find an obvious parallel between the life of an overworked graduate student and that of a sweaty social worker. The contrasts in geography, weather, sanitation, environment, and, most importantly, context overshadow a core similarity: the approaches used by those working effectively in the development field are often fundamentally identical to the scientific method used in academic research. This similarity emerges because both fields tackle important unanswered problems with no easy solutions published in the back of a textbook. More importantly, it suggests that academic research can serve as a model for those of us working in development, and specifically, it shows why we need to continue conducting evidence-based studies and impact assessments of our programs.

Here's a hypothetical comparison between the two fields. My brother and Priya might respectively ask: why are these bone marrow cells growing indefinitely, or why are children in this slum not coming to the clinic for immunizations? Both start with known information -- an established genetic pathway affecting cell growth, or a thorough understanding of the community's cultural traditions. The next step is to form a hypothesis about the cause of the problem -- maybe a particular receptor on the cells is overactive, or the community is unaware of immunizations. This hypothesis leads them to design an experiment or intervention -- let's engineer a molecule that inhibits the overactive receptor, or let's partner with locals to run an awareness-building program about immunizations.

The next and final step is often completed only by researchers, and it models what those of us working in development can do to better reach our goals. After an experiment, my brother collects and analyzes the relevant data. The purpose is to determine whether the molecule has actually stopped the bone marrow cells from growing indefinitely, and to confirm that the change is not a side effect of something else. If the analysis shows that the molecule works, he can share the new finding with other scientists working on the same phenomenon. If the molecule fails to stop the continuous cell growth, revealing that his hypothesis was wrong, my brother heads back to the drawing board and uses this new information to form another hypothesis.

In the development field, however, more organizations must begin, or continue, analyzing the efficacy of their interventions after implementing them. Such evidence-driven studies can show whether a program is actually working and identify holes that must be filled. Far too many development projects have kept executing the wrong solution instead of recognizing a fruitless intervention and heading back to the drawing board to generate a new hypothesis.

In the Fakir Bagan slum, Priya would check whether mothers are now bringing their children to the clinic for vaccinations. If so, how many? And how do we know it is because of our program and not something else -- say, by comparing immunization rates with a similar community that did not receive the intervention? In the long run, are fewer children falling sick, and can we share our model with other organizations?

Impact assessments of pilot programs offer another vital benefit in development work. Because many organizations face limited funding and manpower, they must learn to use these resources efficiently. Conducting an impact assessment of a pilot program can shed light on whether the intervention is worth scaling. While working at Ummeed Child Development Center over the past year, I performed my first impact assessment of a pilot program for this very purpose.

Ummeed provides family-centered care and various other services to families of children with developmental disabilities. It recently piloted a culturally appropriate reading intervention for ten children with autism, cerebral palsy, learning disabilities, and other disorders.

Our assessment showed substantial benefits for these children and their parents, exceeding our initial observations. Several children's vocabularies dramatically improved. A few parents even started using books to teach their children the alphabet and counting. Nine of the ten children were more engaged with books. Program directors saw better parent-child relationships. These findings showed us that the reading intervention could equip parents to provide invaluable verbal and literacy interactions that support their children's cognitive development. This evidence also justified scaling the program with our finite resources.

Without data-driven research supplemented with qualitative case studies, we cannot truly know whether a community-based intervention or other development initiative is fulfilling its mission. Even when an intervention shows obvious improvements in the community it serves, we must still conduct a rigorous impact study; trusting our intuition and general observations can lead to imprecise conclusions about a program, which is detrimental in the long run.

We must note that while many organizations recognize the need for scientific studies, they may not have the resources and skills needed to elicit honest and thorough responses from beneficiaries. Moreover, NGOs exist in a system where honest assessment is not always conducive to their survival, thanks to the malleable metrics of the social space. Where a business will sink if it misses its financial targets, and therefore has a strong incentive to hire an impact consultant, an NGO is more likely to manipulate its metrics than admit any shortcomings to its donors.

So then, what's the similarity between my overworked brother and a sweaty social worker like Priya? If we're talking about how they approach a problem, then everything. But Priya and the rest of us in development need to place a greater focus on conducting impact assessments and rigorous studies of our interventions to stay true to our cause.
