
Learning from failure

Lessons from a crowdsourcing failure


We are working on an evaluation of a large rural roads rehabilitation program in Rwanda that relies on high-frequency market information. We knew from the get-go that collecting this data would be a challenge: the markets are scattered across the country, and by design most are in remote rural areas with bad connectivity (hence the road rehab). The cost of sending enumerators to all markets in our study on a monthly basis seemed prohibitive.
Crowdsourcing seemed like an ideal solution. We met a technology firm at a conference in Berkeley, and we liked their pitch: use high-frequency, contributor-based, mobile data capture technology to flexibly measure changes in market access and structure. A simple app, a network of contributors spanning the country, and all the price data we would need on our sample of markets.
One year after contract signing and a lot of troubleshooting, less than half of the markets were visited at the specified intervals (fortnightly), and even in these markets, we had data on less than half of our list of products. (Note: we knew all along this wasn't going well; we just kept at it.)

So what went wrong, and what did we learn?

Lessons from some of my evaluation failures: Part 2 of ?


I recently shared five failures from some of my impact evaluations. Since that just scratches the surface of the many ways I’ve experienced failure in attempting to conduct impact evaluations, I thought I’d share a second batch now too.

Case 4: working with a private bank in Uganda to offer business training to their clients, written up as a note here.

Lessons from some of my evaluation failures: Part 1 of ?


We’ve yet to receive much in the way of submissions to our learning from failure series, so I thought I’d share some of my trials and tribulations, and what I’ve learnt along the way. Some of this comes back to how much you need to sweat the small stuff versus delegate and preserve your time for bigger-picture thinking (which I discussed in this post on whether IE is O-ring or knowledge hierarchy production). But this presumes you have a choice over what you do yourself; often, in dealing with governments and multiple layers of bureaucracy, your scope for micro-management is limited in the first place. Here are a few failures, and I can share more in other posts.

When the Juice Isn’t Worth the Squeeze: NGOs refuse to participate in a beneficiary feedback experiment

Guest post by Dean Karlan and Jacob Appel
Dean has failed again! Dean and Jacob are kicking off our series on learning from failure by contributing a case that wasn’t in the book.
 
I. Background + Motivation
Recent changes in the aid landscape have allowed donors to support small, nimble organizations that can identify and address local needs. However, many have lamented the difficulties of monitoring the effectiveness of local organizations. At the same time as donors become more involved, the World Bank has called for greater “beneficiary control,” or more direct input from people receiving development services.
 
While attempts have been made to increase the accountability of non-profits, little research addresses whether doing so actually encourages donors to give more or to continue supporting the same projects. On the contrary, it may be that a lack of accountability provides donors with a convenient excuse for not giving; or donors may give the same amount even with greater accountability. Furthermore, little research indicates whether increased transparency and accountability would give organizations incentives to be more effective in providing services. Rigorous research will help determine the impact of increasing accountability, both on the behavior of donors and on the behavior of organizations working in the field.

Book Review: Failing in the Field – Karlan and Appel on what we can learn from things going wrong


Dean Karlan and Jacob Appel have a new book out called Failing in the Field: What we can learn when field research goes wrong. It is intended to highlight research failures and what we can learn from them, sharing stories that might otherwise be told only over a drink at the end of a conference, if at all. It draws on a number of Dean’s own studies, as well as those of several other researchers who have shared their stories and lessons. The book is a good short read (I finished it in an hour), and definitely worth the time for anyone involved in collecting field data or running an experiment.