Maddeningly, the anti-poverty impact evaluation craze is perilously close to imposing an unrealistic hegemony over social change.
No better example exists than the book More Than Good Intentions from the oft-lionized Innovations for Poverty Action institute at Yale University. The book is a genial, readable, entertaining demand for us all to become truth-seekers and truth-tellers — to get real about the scourge of grinding poverty.
The authors are crusaders for hard-nosed research and evaluation (in particular, randomized controlled trials, the social science gold standard) of anti-poverty programs to inform an idealized donor, foundation program officer or social investor of the future, namely, the perfectly rational decision-maker.
The profession’s conceit is that, until an academic evaluator evaluates it, every anti-poverty program is under suspicion. In the closed world of evaluation, what cannot be measured is invisible. What cannot be validated by an evaluator should not be funded.
Responding to the United Nations definition of poverty (“Fundamentally, poverty is a denial of choices and opportunities, a violation of human dignity. It means lack of basic capacity to participate effectively in society.”), the authors write, “This may be entirely true and accurate. But is it useful?”
Perhaps I am living on a planet populated with people, but I love the U.N. poverty definition. All advocates for economic justice do.
Of course, what the authors mean by “useful” is that human dignity is hard to measure. Social impact measurement is inherently limited to reporting on what it can conceive and quantify.
Consider a neighborhood newspaper anywhere in the world. Computing the impact of a newspaper based on its circulation and advertising revenue produces its valuation, not its value. A paper is a social asset advancing free speech and fostering community cohesion. Our support for a free press is grounded in much more than an economist’s research evaluation.
To the book’s credit, it acknowledges the tension between personal commitment to social justice and the impersonal process of evaluation. But, after paying lip service to our complex and very human do-gooder impulses, in a classic example of escalating commitment, the authors persist in adamantly plugging their brand of academic research evaluation for deciding what works and doesn’t in economic development.
At conferences, in research publications and in the offices of funders, the troubling trend is an emerging hierarchy of hubris. Soon, in-the-trenches anti-poverty practitioners with long experience, community-based organizations close to their clients, market-based programs with real revenues and real customers, and experimental, innovative initiatives with great promise may be written off as woolly-headed, undisciplined or unscalable simply because they are un-evaluated.
To my way of thinking, this is upside down. Art critics more valued than artists? Sports writers more admired than athletes? Political pundits more powerful than elected leaders?
If foundations stopped funding social impact studies, would evaluation remain a “hot” fad? Would anyone outside the halls of academia really care?
We need evaluators and critics. The rougher and tougher, the better. What we don’t need is academic hegemony over activism.
(Next week: Part Two on microfinance evaluation.)