
A Special Series on Evaluation: Complex Problems, Adaptive Solutions

In trying to solve big, complex problems, how do we figure out which approaches work and which ones don't? This series presents reflections from experts in the field on how we should think about evaluation, once we acknowledge that the world and its problems are complex, dynamic and ever-evolving. How do we get beyond simplistic models of cause and effect, inputs and outputs, and ground our approach to evaluation in a more sophisticated understanding of the world we work in?

 
 
 

The Power of Randomized Evaluation: Understanding Issues, Adapting Solutions

August 17, 2014

 

In international development there is a tension between the drive to “scale what works” and the fundamental reality that the world is complex, and solutions discovered in one place often can’t be easily transported to different contexts.

At Innovations for Poverty Action, we use randomized controlled trials to measure which solutions to poverty work and why. We believe that this methodology can help to alleviate poverty, and yet we don’t advocate focusing solely on programs that are “proven” to work in this way. One risk of funding only “proven” – and therefore “provable” – interventions, the “moneyball of philanthropy”, is that an intervention shown to work in one place gets transplanted to a new setting without attention to context, with potentially disastrous results. Another risk is that interventions which cannot easily be subjected to rigorous evaluation may not get funded at all.

The risk of the other extreme – focusing only on the details of a complex local environment – is that we may fail to uncover important lessons or innovative ideas that can improve the lives of millions in other places. Focusing only on the complexity and uniqueness of each situation means never being able to use prior knowledge, dooming one to constantly reinvent the wheel.

Defining complexity

Before going further, it is useful to define what we mean by “complexity.” Complexity refers to something with many parts interacting with each other in multiple ways. As outlined by Owen Barder, it can refer to the complexity of problems, contexts, interventions and, I would add, evaluations.

Regardless of methodology, the goal of any intervention should be to better understand the complexity of the problems it addresses and, armed with that understanding, to develop solutions that can be adapted to different contexts.

Well-designed randomized evaluations help us do exactly that. They help to resolve the tension between the two development approaches – “scale what works” and “adapt to complexity”.

Randomized evaluations help us understand complex environments and behaviors by testing hypotheses about why certain behaviors occur. For example, J-PAL’s incentives for vaccines experiment aimed to understand whether low immunization rates stemmed from a supply and awareness problem or from a deeper issue on the demand side. To test these hypotheses, researchers evaluated two interventions: a health camp that provided regular access to vaccines and built awareness, and the same camp plus a small incentive for each round of vaccination.
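To make the logic of that comparison concrete, here is a minimal sketch in Python. The village names, arm sizes and outcome variable are hypothetical illustrations, not J-PAL’s actual code or data; the point is only how random assignment lets the two treatment arms separate supply-side from demand-side explanations.

```python
import random

# A minimal, illustrative sketch (hypothetical units and outcomes): randomly
# assign villages to three arms so that differences in immunization rates can
# be attributed to the program rather than to pre-existing differences.
random.seed(42)

villages = [f"village_{i:03d}" for i in range(120)]  # hypothetical study units
random.shuffle(villages)

arms = {
    "control": villages[0:40],                # status quo
    "camp_only": villages[40:80],             # reliable camps + awareness (supply side)
    "camp_plus_incentive": villages[80:120],  # camps plus a small incentive (demand side)
}

def arm_mean(immunization_rate, arm_villages):
    """Average a (hypothetical) measured outcome over the villages in one arm."""
    return sum(immunization_rate[v] for v in arm_villages) / len(arm_villages)

# Reading the comparison:
#   camp_only > control              -> access and awareness were part of the problem
#   camp_plus_incentive > camp_only  -> demand-side barriers also matter
```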

Better understanding, better diagnosis

Randomized evaluations also provide opportunities to design and test innovative solutions. Sendhil Mullainathan and Saugato Datta observe that “better understanding leads to better diagnosis, which in turn leads to better-designed solutions”. The increasing use of behavioral economics in program design, combined with the rigorous evaluation of innovative ideas through randomized trials, has led to a richer understanding of the problems of poverty and a growing number of innovative solutions. For example, we found that sending people text messages reminding them to save increased their savings by 6%, and by even more when the reminders mentioned people’s savings goals.

If they are well designed, randomized evaluations lead to findings that can be adapted to new contexts. But this cannot be done blindly. The key is to structure experiments so as to understand the mechanism that led to the program’s impact (or lack of impact). This can be done by clearly outlining this mechanism or theory of change as part of the evaluation design and by measuring intermediary variables along the way, not just the final outcome.

It can also be done by testing different variations of an intervention. For example, in the “reminders to save” experiment, researchers tested text messages with and without a mention of the savings goal – to understand whether that piece in itself was important. Often, it takes more than one evaluation to fully understand a problem or mechanism. Our knowledge comes from a body of studies.
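As a rough illustration of that design principle, the sketch below compares a “reminder” arm and a “reminder mentioning the goal” arm against a control group on both an intermediary variable and the final outcome. The field names are hypothetical, not the study’s actual data; the point is that the mechanism, not just the end result, shows up in what is measured.

```python
from statistics import mean

def arm_means(records, arm):
    """Mean outcomes for one arm; each record is a dict with hypothetical keys."""
    rows = [r for r in records if r["arm"] == arm]
    return {
        "recalled_reminder": mean(r["recalled_reminder"] for r in rows),  # intermediary variable
        "savings_balance": mean(r["savings_balance"] for r in rows),      # final outcome
    }

def treatment_effects(records):
    """Difference from control for each message variant, on both outcomes."""
    control = arm_means(records, "control")
    return {
        arm: {k: v - control[k] for k, v in arm_means(records, arm).items()}
        for arm in ("reminder", "reminder_with_goal")
    }
```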

If the mechanism is well understood, lessons can be adapted to new situations, but this needs to be combined with a good understanding of the local context. Smart evaluation tries to understand local contexts and how to adapt lessons learned elsewhere. For example, the lesson from the incentives for vaccines experiment mentioned above is not that one should distribute lentils to parents when they get their children vaccinated (this was the incentive used in Rajasthan). The lesson is that the decision to vaccinate is not just a matter of access or awareness, and that parents need an additional push. If one knows the new context well, this lesson can be adapted.

Testing adaptation to new contexts

One can also use randomized evaluations to test different ways of adapting an idea to a new context. For example, several studies showed that focusing instruction at the child’s level is key to improving learning levels, and that this can be done by having volunteers teach remedial classes to children who lag behind. What is the best way to do this? Should it be done during school, or after school hours? In Ghana we tested different ways to adapt this idea, including these two models.

One example of how to think about randomized trials is our recent study in Uganda, where lack of funds for basic school supplies can be a barrier to kids’ education. We had a general idea that a savings education program with lockboxes in schools could work, but didn’t know the most effective way to carry it out.

We tried four versions: with or without involving parents, and either returning students’ savings to them as a voucher that could only be spent on school supplies, or as cash to spend as they liked. To our surprise, we found that the less restricted option – plain cash – resulted in more school supplies being purchased, but only when combined with family outreach. The kids in that group bought more school supplies and had higher test scores. In other words, we used the randomized evaluation methodology to compare four possible iterations of one idea against one another, and found one that seemed to work.
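A sketch of that 2x2 comparison, again with hypothetical variable names rather than the study’s actual data, would look something like this:

```python
from itertools import product
from statistics import mean

# The two factors crossed in the design: family outreach and the form in which
# savings were returned. Crossing them yields the four program variants.
FACTORS = {
    "family_outreach": (False, True),
    "payout": ("voucher", "cash"),
}
variants = list(product(FACTORS["family_outreach"], FACTORS["payout"]))  # 4 cells

def cell_means(pupils, outreach, payout):
    """Mean outcomes for the pupils assigned to one of the four variants."""
    rows = [p for p in pupils
            if p["family_outreach"] == outreach and p["payout"] == payout]
    return {
        "supplies_bought": mean(p["supplies_bought"] for p in rows),
        "test_score": mean(p["test_score"] for p in rows),
    }

# The pattern described above would appear as the (outreach=True, payout="cash")
# cell outperforming the other three on both measures.
```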

Intelligent adaptation

Does this mean that all kids’ programs should involve parents and be less restrictive? Of course not. Nor does it mean that this specific program should be implemented in Peru or India. It helps us get at the complexity of the problem, and how to approach it – we determined which possible factors might be at work in the complex problem at hand, and then tested them. Regardless of one’s approach, adapting to complexity is more than doing what’s already been done; it means exploring new terrain.

In summary, designing effective interventions is not about pitting proven solutions against adaptive initiatives. It’s about understanding issues, identifying effective ideas and constantly adapting them, intelligently, to different contexts.

 
 
 
