How do you measure social impact?
We are all striving to effect change in one way or another: change in the way social problems are addressed and/or change in the lives of the constituents we serve. How will we know when we have succeeded? Every social entrepreneur struggles to identify when he or she has moved beyond implementing a good idea to achieving real change. As change agents, the success of our endeavors depends on our ability to demonstrate the impact of our work to staff, funders, clients, other stakeholders, and the general public.
Undoubtedly, there are many barriers to measuring social impact—the first being the perception that social change cannot be measured or defined by metrics or data points. Many take the “I know it when I see it” approach; in other words, change is measured by anecdotal evidence gathered when clients and constituents are seen benefiting from services, not from numbers and statistics. At Innovation Network, we believe that qualitative information can be very effective in measuring social change. The challenge is to be systematic in how you collect and use qualitative data to evaluate your work.
The second challenge is the tendency to equate program performance or program activities with impact. Measuring social change is not only about counting the number of people served, number of micro loans made or new health clinics established. These benchmarks can help you confirm if you are on the right track, but they don’t tell the full story about your impact. Measuring impact is akin to asking: We served these people, we made these loans but so what? What change occurred as a result of our work?
To measure social impact, you need to draw on the same creative and critical thinking skills you used to launch your program, and you need to get concrete about what you hope to achieve. It's your vision, so you have to define your own success.
The basis for your success is the outcomes or expected change you hope to see among clients, communities, systems or organizations as a result of your work.
In general, your outcomes should:
- Represent changes that can logically be expected to result from activities articulated in a logic model,
- Be within the program’s sphere of influence,
- Be generally accepted as valid by various stakeholders of the program,
- Be phrased in terms of change, and
- Be measurable.
Entrepreneurs are often tempted to stay focused on the big goals they have set for their programs. However, we have found it helpful to think about outcomes along a continuum of change, from what is within your immediate control at one end to what you hope to accomplish in the long term at the other. Think about:
- What changes do you expect to see?
- What changes would you want to see after that?
- What changes would you hope to see after that?
This helps you to ensure that your outcomes are measurable and realistic.
We recently worked with Liza Chambers, a 2004 Echoing Green Fellow. Liza's organization, Soliya, is dedicated to promoting intercultural understanding and awareness and to galvanizing young adults to act as constructive global intermediaries. For Soliya's Connect program, Liza has defined the following short-term, intermediate, and long-term outcomes:
- Short-term outcome (change in attitude): Program participants (university students) develop empathy and understanding for alternative perspectives.
- Intermediate outcome (change in behavior): Participants educate friends and family based on what they have learned.
- Long-term outcome (change in condition): The community has increased empathy for alternative perspectives.
Identifying outcomes provides structure to your vision by articulating where you want to go and what you hope to achieve. How will you know when you get there? You need evidence, or indicators, that signal you have succeeded in achieving the desired outcomes.
Indicators can be quantitative or qualitative. They should be meaningful, direct, useful, and practical to collect. In creating indicators of your success, think about the following:
- What. Describe the condition, behavior, or characteristic that you will measure.
- Who. Specify the target population you will measure.
- How much. State the degree of change you expect to see.
- How many. Identify the amount of change among your target population that would indicate a successful level of achievement. This sets the target for your work; base it on an understanding of your baseline and a level of change that is reasonable for your program.
- When. Note the timeframe in which this change should occur.
Examples of Soliya's indicators, paired with the outcomes they measure, are as follows:
- Outcome: Program participants (university students) develop empathy and understanding for alternative perspectives. Indicator: The majority of participants will be able to express views that are different from their own by the mid-point of the program.
- Outcome: Participants educate friends and family based on what they have learned. Indicator: The majority of students report speaking to friends and family by program completion.
- Outcome: The campus community has improved empathy for alternative perspectives. Indicator: Increased attendance at public cultural events, lectures, academic courses, etc. in the year following the program.
Once you have defined your outcomes and indicators, you are well on your way to measuring social impact in a systematic and credible way!
Resources
Key Steps in Outcomes Management. 2003. Harry P. Hatry and Linda M. Lampkin. The Urban Institute. The first in a series on outcome management for nonprofit organizations published by the Urban Institute. http://www.urban.org/UploadedPDF/310776_KeySteps.pdf
Developing Community-wide Outcome Indicators for Specific Services. 2003. Harry Hatry, Jake Cowan, Ken Weiner, and Linda M. Lampkin. The Urban Institute.
Outcomes Based Evaluations Using the Logic Model. 2002. Center for Substance Abuse Prevention, Substance Abuse & Mental Health Services Administration. A training program about logic models and evaluation. http://capt.cnsusa.com/docs/OutcomesBased.pdf
Measuring Program Outcomes: A Practical Approach. 1996. United Way of America. A step-by-step program evaluation manual for agencies supported by United Way. Not available online as a PDF; ordering information is available at http://national.unitedway.org/outcomes/resources/mpo/
About Innovation Network
Innovation Network’s mission is to improve nonprofit results by building evaluation capacity. Working with nonprofit organizations and funders through evaluation consulting, training, Web-based tools, and outreach, we seek to increase evaluation and planning knowledge and skills for the entire nonprofit and philanthropic field, and to build the ability of individual nonprofits to meet their missions. Measure results. Make informed decisions. Create lasting change. For more information, see: http://www.innonet.org/
Deb Levy brings more than ten years' experience in social science research, with a strong background in public policy and criminal justice, to the organization. At Innovation Network, she provides oversight and technical assistance to nonprofit organizations in the areas of program planning, evaluation, and capacity building.
Kathy Brennan is an experienced team leader with a deep knowledge of social welfare programming and evaluation, and more than ten years' experience in the nonprofit sector. Ms. Brennan is the lead evaluator for several large-scale initiatives at Innovation Network and has conducted dozens of trainings on logic models and program evaluation.
Thanks for joining us for this ongoing discussion over the next week. Kathy and I look forward to answering any questions you have and engaging in any relevant discussions.
Just yesterday I was asked to explain how I will capture qualitative change in people’s lives. It seems easy to capture quantitative data but it isn’t so easy to measure qualitative changes. I said that I would use life stories or case studies. I wonder what your views are on the subject.
Looking forward to a fruitful discussion.
Thanks for the links to the resources, they are excellent and useful.
Many people agree with you that capturing qualitative change is more difficult than capturing quantitative information. Qualitative information also tends to add a richness and tell a story that can't be seen in numbers. Measuring qualitative change requires looking systematically across qualitative data and having set research questions so you know what you are looking for: what would the indicators of qualitative change in people's lives be for your program? Once you know what those indicators are, you can use interviews, focus groups, even journals and surveys to get at this information. Stories and case studies, as you suggest, are a great way of painting a picture of what can happen with one person or one family; looking across data from many people involved in a program can tell you how successful the program is overall.
I admit I have not been paying attention to planning systematically for qualitative changes. This is a problem with many local organisations in Nigeria (and, I daresay, international organisations working in Nigeria as well). I usually spend a lot of time on the quantitative changes, sometimes to meet donor demands, and just wish/hope that I will be able to capture a few qualitative changes as a bonus!
Also, I hardly look 'across data over many people involved in a program', as you stated. The stories and case studies I collect are usually few and haphazard. I can now understand the difficulties that our partners face when we request cases/stories – we do not plan well in advance what we want to capture or how we are going to capture it!
Great start, looking forward to more tips.
I have a philosophical question about a possibly destructive side effect of current evaluation practices. The nuances of reality, in all their richness, cannot be crammed into a small mechanical space, and yet that’s precisely what many nonprofits are asked to do by funders looking for evaluation results.
Nonprofit staff and consultants are therefore presented with this quandary: To what extent should we orient toward funders’ requirements (and cramming reality into small boxes), and to what extent should we stay grounded in our own missions? Too far in one direction means no grant funding. Too far in the other direction means that we subvert our missions – focusing on organizational survival, instead of on our organizational purpose. Where do you find the balance point?
I understand your struggle between meeting funders' requirements for evaluation and standing ground with an organization's mission. At Innovation Network, we take a participatory approach to evaluation in the hope that if an organization is involved in planning the evaluation (developing a logic model with goals, creating evaluation questions, and creating evaluation plans), the results won't threaten the organization's mission. In other words, we advocate from the beginning of program planning that organizations work with their funders to agree upon outcomes: short-term, intermediate, and long-term. If this occurs, then evaluation results will provide insight into whether the short-term (and possibly the intermediate) outcomes occurred and, if so, whether the intermediate and long-term outcomes will follow.
If someone wants to give an example, maybe we can work through a chain of outcomes and look at them in relation to an organization’s mission.
In general, do you find funders receptive to your proposed outcomes? Or (as in my experience) do funders tend to want to bend your outcomes (or even flat-out replace them) to suit their missions? Do you have any suggestions for effectively negotiating outcomes that truly match both your own mission and the funder's?
My experience is mainly with micro-level projects, which allow us to plan with the communities. So far, there have not been any tensions between our outcomes and those of the funders. But in bigger (multi-year, multi-million naira) projects, it usually takes longer to square off expectations, and the funders are usually insistent on their outcomes. In some cases, there have been useful negotiations that resulted in win-win situations. In others, the funders have been outright insistent and we have skirted around it (working extra hard to meet some of the expectations of the communities that did not suit the fancy of the funders). That was our lot because our organisation was a donor-dependent local NGO!
Some of your keywords and phrases suggest we have overlapping concerns and experiences…. You say "I am swamped with on-going work. Second, access to internet resources is quite limited where I work…" and you mention "naira". I clicked on your photo hoping to find more information about your projects – but only found your posts. Please tell me more.
I wonder if any of CawdNet's SIGs (Special Interest Groups) overlap your interests and needs. Regarding problems with Internet access, you sound like our other Nigerian associates: "bandwidth challenged". (And I would guess that, like them, you serve communities who are bandwidth poor.)
On the bandwidth rich side of CawdNet (based in UK – but welcoming volunteers anywhere on the Internet) we use the high bandwidth Internet connections we have in our homes. We try to support our bandwidth challenged associates in their attempts to pull information from the Internet. CawdNet is small – and pretty swamped – but hoping that Social Edge will help us to do what we do better and also find more volunteers and donors.
If your community project serves needs that overlap other CawdNet community projects perhaps we can gain strength through learning from each other. You are welcome to any relevant information we are accessing on behalf of our Nigerian projects. Maybe as we get to know each other better we may also discover areas for collaboration.
I look forward to your reply – either here or as a personal email