
"If I can’t do an impact evaluation, what should I do?” – A Review of Gugerty and Karlan’s The Goldilocks Challenge: Right-Fit Evidence for the Social Sector

By David Evans

Are we doing any good? That’s what donors and organizations increasingly ask, from small nonprofits providing skills training to large organizations funding a wide array of programs. Over the past decade, I’ve worked with many governments and some non-governmental organizations to help them figure out whether their programs are achieving their desired goals. During those discussions, we spend a lot of time drawing the distinction between impact evaluation and monitoring systems. But because my training is in impact evaluation – not monitoring – my focus tends to be on what impact evaluation can do and on what monitoring systems can’t. That sells monitoring systems short.

Mary Kay Gugerty and Dean Karlan have crafted a valuable book – The Goldilocks Challenge: Right-Fit Evidence for the Social Sector – that rigorously lays out the power of monitoring systems to help organizations achieve their goals. This is crucial. Not every program will or even should have an impact evaluation. But virtually every program has a monitoring system – of one form or another – and good monitoring systems help organizations do better. As Gugerty and Karlan put it, “the trend to measure impact has brought with it a proliferation of poor methods of doing so, resulting in organizations wasting huge amounts of money on bad ‘impact evaluations.’ Meanwhile, many organizations are neglecting the basics. They do not know if staff are showing up, if their services are being delivered, if beneficiaries are using services, or what they think about those services. In some cases, they do not even know whether their programs have realistic goals and make logical sense.”
 

Weekly links July 27: Advances in RD, better measurement, lowering prices for poop removal, and more...

By David McKenzie
  • Matias Cattaneo and co-authors have a draft manuscript on “a practical guide to regression discontinuity designs: volume II”. This includes discussion of a lot of practical issues that can arise, such as dealing with discrete values of the running variable, multiple running variables, and geographic RDs. Stata and R code are provided throughout.
  • Great Planet Money podcast on the Poop Cartel – work Molly Lipscomb and co-authors are doing to lower prices for emptying toilets in Senegal.
  • A paper on how to improve reproducible workflow – provides an overview of different tools for different statistical software packages, as well as advice on taskflow management, naming conventions, etc.
  • J-PAL guide on measuring female empowerment
  • Reviewing a paper that you have already reviewed before? This tweet by Tatyana Deryugina offers a good suggestion: use a PDF comparison tool (she suggests Draftable) to see what has changed between versions.

Are we over-investing in baselines?

By Alaka Holla

 
When I was in second grade, I was in a Catholic school, and we had to buy the pencils and pens that we used at school from a supply closet. One day I felt like getting new pencils, so I stood in line when the supply closet was open and asked for two. Before reaching for the pencils, the person who operated the supply closet, Sister Evangelista, told me a story about her time volunteering in Haiti, how the children she taught there used to scramble about in garbage heaps looking for discarded pieces of wood, charcoal, and wire so that they could make their own pencils. I left the closet that day without any pencils and with a permanent sense of guilt when buying new school supplies.
 
I now feel the same way about baseline data. Most of the variables I have ever collected – maybe even 80 percent – sit unused, while only a small minority make it into any tables or graphs. Given the length of most surveys in low- and middle-income countries, I suspect that I am not alone in this. I know that baselines can be useful for evaluations and beyond (see this blog by David McKenzie on whether balance tests are necessary for evaluations and this one by Dave Evans for suggestions and examples of how baseline data can be better used). But do we really need to spend so much time and so many resources on them?
 

Weekly links July 13th....Friday the 13th

By Markus Goldstein
And here are the weekly links for your Friday the thirteenth:
  • Don't be afraid, we're just hiring: DIME is looking for a field coordinator based in Peru, and two research assistants based in Washington (position one and two).
  • Was that a whisper I heard? Over at the CGD Blog, Sarah Rose goes hunting for signs of the use of evidence in RFPs from a large aid agency.
  • Just don't look under the bed: On Goats and Soda, a nice piece on Banerjee et al.'s work on using postcards to reduce leakage from a huge social program in Indonesia.
  • And should you really be afraid because it's the thirteenth? Livescience debunks the odds that you'll be in a car wreck (British humor strikes again) and National Geographic explains why you need to leave the house...now. So stop your triskaidekaphobia before you hurt yourself.
 
