Rethinking Accountability for Adaptive Initiatives
Associate Director, Center for Evaluation Innovation
August 17, 2014
In trying to solve big, complex problems, how do we figure out which approaches work and which ones don't? This series presents reflections from experts in the field on how we should think about evaluation, once we acknowledge that the world and its problems are complex, dynamic and ever-evolving. How do we get beyond simplistic models of cause and effect, inputs and outputs, and ground our approach to evaluation in a more sophisticated understanding of the world we work in?
Unless we reframe what it means for social innovations to be accountable, we risk squelching the very creativity and ingenuity that are crucial to their success.
Recognizing that difficult problems often require long-term transformative solutions, many funders and non-profits are adopting innovative strategies that are complex and dynamic, with goals and activities that emerge along the way. Traditional formative and summative approaches to evaluation are not a good fit for these adaptive approaches. In fact, they can be counterproductive, subverting innovation rather than supporting it.
A better method for evaluating adaptive initiatives is developmental evaluation. It embeds ongoing evaluative inquiry into an initiative and offers real-time data and learning about innovations as they evolve, supporting new developments where next steps are unknown.
Not “just about learning”
The buzz around developmental evaluation is growing. But some have misunderstood it to be “just about learning” and not about accountability for results. This is a mistake, best understood by looking at how accountability has traditionally been conceptualized and then reframing it for adaptive strategies.
Billions in public and private funds have been spent on programs that have produced limited benefits. Smart, committed and compassionate donors and non-profits have dedicated years to programs that feel effective—and about which they can tell many stories of success for individual participants. But on rigorous examination, many of these programs have proved largely inconsequential for their intended beneficiaries.
There is a long history of external criticism of these failings. Citizen groups, social critics, academics and watchdogs have pressured donors and non-profits to be more serious, thoughtful and rigorous. From this emerged the call to “be accountable for results”.
Two kinds of failure
When a program fails to achieve its objectives, being accountable for results requires asking what obstacle is getting in the way of success and then adapting the work to overcome it. Two types of failure generally get in the way. The first is theory failure: the idea was wrong, and no causal relationship exists between the strategies used and the desired outcomes. The second is implementation failure: the theory might be right, but the effort lacks the resources and capacities needed to produce results.
These failures suggest the need for both smarter planning and better implementation, and many funders began associating accountability with certain approaches. To address theory failure, they improved program design through strategic planning supported by research, logic models, and theories of change. To address implementation failure and ensure that plans were properly followed, they instituted monitoring via progress reports, site visits, dashboard metrics, and formative evaluation. To address both theory and implementation failure, summative evaluation and outcomes measurement became mechanisms for ensuring ultimate accountability for results.
These traditional accountability mechanisms are a great fit for stable interventions. But because they focus primarily on the upfront quality of the plan and its faithful implementation, they do not address the kinds of failures that affect complex initiatives, where it is impossible to plan everything in advance. In fact, these mechanisms might actually reduce chances for success because they incentivize sticking to pre-set plans long after those plans have stopped being relevant.
How can we re-think accountability to guard against the kinds of failures that complex change efforts encounter? Adaptive initiatives are at risk of failing when innovators ignore the shifting conditions around them and neglect to adapt their theories and implementations. Others fail when innovators lose sight of their long-term goals, even evolving ones, or do not track results and apply the lessons that emerge along the way.
Did we act as effectively as possible?
The fundamental accountability question for adaptive initiatives should be: Did we act as effectively as possible? Unlike the more traditional accountability question (did we do what we planned to do?), it allows plans to evolve and strategies to shift. It holds innovators accountable for collecting systematic data about their actions, learning from them, and adapting intelligently, rather than for making plans under the illusion of predictability and precision and blindly adhering to them.
Developmental evaluation is the best approach for ensuring that adaptive initiatives act as effectively as possible (i.e., are accountable). When optimal next steps cannot be predicted in advance, developmental evaluation uses real-time testing, feedback, learning, and adjustment to suggest the most effective way forward.
As evaluator Patricia Rogers said, “The acid test of a good accountability system is that it encourages responsibility and promotes better performance.” For adaptive initiatives, developmental evaluation offers a system that can pass that test, where other more traditional monitoring and evaluation approaches will fail.