How do we, as development practitioners charged with designing and implementing projects, really know that these projects are delivering on their intended outcomes and improving beneficiaries’ quality of life? And how do we learn from our successes and failures to improve future projects?
How do the donors and development partners providing the resources for these projects know that taxpayer money has been used to support poverty reduction, economic growth, and other goals, or even simply that the funds were put to "good use"?
These questions are not new to international development. Measuring development effectiveness has been a priority, and a persistent challenge, for decades. However, when the projects in question are related to “resilience”, the challenge seems to become amplified. Resilience is a complex concept with many competing definitions, and can be observed only when a shock occurs. As a result, it is difficult to measure and monitor.
Many of us feel that we would be better able to improve resilience if we could quantify it. As the saying goes, "what does not get measured cannot be managed." This lures us into reaching immediately for a universal, perfect indicator with which to measure resilience and compare projects based on the resilience benefits they generate.
However, the quest for a resilience indicator has been challenging. Many indicators have been proposed; each has its strengths, but also its weaknesses. Particularly worrisome is the fact that any imperfect indicator – with its inability to measure exactly what we want – can easily lead to perverse incentives for practitioners, and favor outcomes that are very different from those intended.
Today we published a short paper entitled "What cannot be measured still needs to be managed." It reflects on these issues, highlighting how well-intended indicators can lead to perverse incentives. It draws upon detailed examples from other domains in which defining objectives and measuring outcomes is difficult. In education, criminal justice, and health, poorly defined indicators have led to ill-considered incentives and poor outcomes. Teachers have been found to "teach to the test;" police officers have been found to focus on the crimes that are easiest to solve; and making mortality rates publicly available has made it more difficult to find surgeons willing to operate on high-risk patients.
Similar problems are likely to result from imperfect resilience indicators, if they are implemented without care. To help avoid these issues, we propose seven “thought experiments” by which a resilience indicator can be tested, so that its potential drawbacks can be identified early on.
For example, counting the number of people benefiting from increased resilience would favor projects delivering small changes to many people, maybe at the expense of more transformational projects that target the most vulnerable. Considering the amounts invested in resilience may favor solutions that are capital-intensive and expensive, at the expense of smarter and cheaper options. And using a traditional cost-benefit analysis based on avoided losses would favor interventions targeting richer households, at the expense of actions for the poorest, who own next to nothing and therefore experience limited losses in absolute terms. Taken together, the seven experiments can inform decisions around whether or how to institutionalize a resilience indicator, based on its benefits and risks.
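The cost-benefit point can be made concrete with a toy calculation. The sketch below uses entirely hypothetical numbers and a deliberately naive avoided-losses metric; it is only meant to illustrate why such a metric mechanically favors richer households:

```python
# Illustrative sketch (all numbers hypothetical): a naive cost-benefit
# analysis based on absolute avoided losses favors protecting richer
# households, simply because their assets are worth more in dollar terms.

def avoided_loss_benefit(asset_value, damage_fraction_avoided):
    """Absolute losses avoided by an intervention (asset value times
    the fraction of damage the intervention prevents)."""
    return asset_value * damage_fraction_avoided

# Two hypothetical households facing the same flood risk, and two
# hypothetical interventions with costs proportional to asset values.
households = [
    ("richer household", {"assets": 100_000, "cost": 5_000}),
    ("poorer household", {"assets": 2_000, "cost": 500}),
]

for name, hh in households:
    benefit = avoided_loss_benefit(hh["assets"], 0.30)
    ratio = benefit / hh["cost"]
    print(f"{name}: avoided losses = {benefit:,.0f}, benefit/cost = {ratio:.1f}")

# richer household: avoided losses = 30,000, benefit/cost = 6.0
# poorer household: avoided losses = 600,   benefit/cost = 1.2
# The poorer household "loses" little in absolute terms, yet the same
# shock may wipe out nearly everything it owns.
```

The asymmetry here is the point: a metric denominated in absolute dollar losses cannot register that the poorer household's relative loss, and its impact on wellbeing, may be far greater.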
We need to monitor and evaluate the progress of our projects. But to avoid the pitfalls of pursuing the “perfect indicator,” we suggest adopting a set of complementary approaches that help tell a more comprehensive, and more nuanced, story of how individual projects and portfolios deliver resilience-related outcomes.
These approaches may be more complicated than many would prefer, but they may be better able to help us deliver projects that make people and communities more resilient over the long term.