
"If I can’t do an impact evaluation, what should I do?” – A Review of Gugerty and Karlan’s The Goldilocks Challenge: Right-Fit Evidence for the Social Sector


Are we doing any good? That’s what donors and organizations increasingly ask, from small nonprofits providing skills training to large organizations funding a wide array of programs. Over the past decade, I’ve worked with many governments and some non-government organizations to help them figure out whether their programs are achieving their desired goals. During those discussions, we spend a lot of time drawing the distinction between impact evaluation and monitoring systems. But because my training is in impact evaluation – not monitoring – my focus tends to be on what impact evaluation can do and on what monitoring systems can’t. That sells monitoring systems short.

Mary Kay Gugerty and Dean Karlan have crafted a valuable book – The Goldilocks Challenge: Right-Fit Evidence for the Social Sector – that rigorously lays out the power of monitoring systems to help organizations achieve their goals. This is crucial. Not every program will or even should have an impact evaluation. But virtually every program has a monitoring system – of one form or another – and good monitoring systems help organizations do better. As Gugerty and Karlan put it, “the trend to measure impact has brought with it a proliferation of poor methods of doing so, resulting in organizations wasting huge amounts of money on bad ‘impact evaluations.’ Meanwhile, many organizations are neglecting the basics. They do not know if staff are showing up, if their services are being delivered, if beneficiaries are using services, or what they think about those services. In some cases, they do not even know whether their programs have realistic goals and make logical sense.”

Effective monitoring systems – which is to say, monitoring systems that help programs to improve – collect monitoring data that are credible, actionable, responsible, and transportable. Gugerty and Karlan call these the CART principles. (Don’t confuse these with the SMART criteria for good indicators, which you’ve likely encountered if you’ve been in this field for a while. The SMART criteria are useful, but Gugerty and Karlan set their sights higher than individual indicators, exploring how to design the monitoring systems themselves.)

Credible monitoring systems collect high quality data and analyze those data correctly. If field officers’ salaries depend on visiting five farmers a week, then simply asking field officers how many farmers they visited – without any follow-up checks – is probably not credible.

Actionable systems collect data that could actually affect how the organization works. (This is a principle that I believe is often forgotten and that had me repeatedly exclaiming “yes!” while reading this book.) The book is rife with stories of organizations over-collecting data that never get analyzed and – even when analyzed – never get used. For every piece of data, consider the following three questions:
  • “Is there a specific action that we will take based on the findings?”
  • “Do we have the resources necessary to implement that action?”
  • “Do we have the commitment required to take that action?”

Responsible data collection systems strike the right balance between money spent on data collection and analysis and money spent on implementing programs. Investing in data that are not actionable wastes money that could be spent vaccinating children or educating farmers. But failing to invest in data that could help you deliver programs to children and farmers more effectively is also a waste.

Transportable systems generate knowledge that can be useful for other programs. As the authors acknowledge, the transportable principle is most important for impact evaluations; but they argue that even monitoring systems can help other organizations in the same field to improve, for example, “when organizations share lessons about how to increase the usage rate of effective products or how to most credibly collect data on a particular topic.”

What I found most useful about the book: it lays out clear ideas for how to make sure that data are credible, actionable, and responsible. The chapter on the value of a good theory of change (aka “logic model, logical framework, program theory, or program matrix”) – and seven steps to build one – is excellent. That chapter – plus the case studies – demonstrates how the exercise of developing such a theory, whether at the beginning of program development or well into implementation, can change an organization’s strategy for the better.

Even the chapter on applying the CART principles to impact evaluation – an area with which I have quite a bit of familiarity – provided new insights, such as establishing a “shutdown or redesign rule.” The authors pose the example of a water chlorination system. Even if one isn’t in a position to do a full impact evaluation to establish the success of the program (i.e., healthier people because of clean water), monitoring data can establish failure: “The shutdown rule could apply if, in conducting a single round of data collection, it turns out that not a single household chlorinated its water.” If no one is chlorinating, then there won’t be an impact on health, so the program should either be redesigned or shut down.

The authors provide six detailed case studies that show both successes and failures – the benefits of developing a theory of change and adjusting the program, and the futility of gathering loads of monitoring data that go nowhere.

The authors also keep the book lively with relevant comics here and there.

The final chapters – on the donor perspective – felt more exploratory than the rest of the book. For example, a detailed characterization of “paying for success” programs (e.g., pay for performance or performance-based contracts) highlights how the aims of donors and implementers may be aligned, but doesn’t clearly map that alignment to the data needs that are the focus of the book. And the chapter on retail donors (i.e., individuals like you and me who may wish to give money to an effective organization) highlights problems and offers more potential solutions than sure answers. But this is a reflection of the state of the field, not a failure of the authors.

Not every program needs an impact evaluation. But every program needs a monitoring system. Now I have a much clearer idea of how to help organizations monitor for success.

Authors

David Evans

Senior Fellow, Center for Global Development
