
Notes from the field: Selection bias, friendship and more

So I spent the last couple of weeks on internal flights to new places and bumping (really bumping) around in the back of various pickup trucks. A couple of interesting experiences:
 
Selection bias trumps friendship. One of the program implementers we are working with was explaining to us why there had been some (small) contamination. She related the story of how they had originally recruited people for this business-related program by taking down phone numbers so they could contact people once the treatment and control groups had been identified. It turned out that one of the people selected for the treatment group had given the mobile number of a friend, since she herself didn't have a phone. Of course, the phone-owning friend was selected for the control group. So... when the phone owner got the call, she didn't pass on the information to her friend, but rather showed up for the program herself. As the implementer pointed out, this was a telling form of self-selection for a business program.
 
Sharing results with the implementer. Of course, one important thing to do before the paper comes out is to talk over the results with the implementer. I had one of these conversations on this past trip, and two things struck me. First, the program folks pointed out that some of their funders were waiting for the impact evaluation results to fund them. So clearly some funders out there are looking for this kind of evidence (and even waiting for it). Second, the results on this evaluation are mixed, and it was pretty critical to get the implementers' take on what kinds of things could be driving the results. I left with another couple of regressions to run.
 
Taking pictures. Every seminar these days has pictures of program activities. So, of course, I take more photos than I used to. This time in Uganda, the tables turned. We were visiting a rather spectacular farmer. I whipped out my phone and captured his raised beds and different varietals for posterity. Then we walked off to look at one of his other fields. At some point during the walk, a nice gentleman with a pretty serious camera and camera bag joined our little group. At the last field, the farmer asked us all to pose with him for a picture. When I asked him why, he pointed out that people come visit his farm and take his picture, but don't send him a copy. He wanted one of us, for his scrapbook.
 
Learning from variation. When you visit the field with the implementer, they may have a tendency to take you to the high performers (like our gentleman above) -- maybe because they're proud of their work, maybe because they confuse you with a funder. However, on this same trip we saw a lot of folks who weren't as spectacular as the photographing farmer. And these folks too had important lessons -- about what might go wrong (survey questions!), about why they were having trouble meeting commitments, and about how this might be affecting other participants (e.g. when beneficiaries have to work together). This seems obvious, but it's the harder side of things to get to and explore.
 
The curse of working with programs that want to be evaluated. Others on this blog (Martin, for one) have pointed out that one of the issues with using impact evaluation evidence to gauge the effectiveness of aid is that, since impact evaluations are mostly voluntary, you are getting a biased sample. That came home to me on this trip in a new way. One of the programs we are working with, a really dynamic and interesting program, is of course also dynamic and interesting to others. And yes, they are getting funds for expansion. And of course, there is a ready-made population for the expansion: the control group. We had an urgent and constructive discussion about how to preserve the impact evaluation while still expanding. But yes, if you work with programs where your prior is that they work... you are not alone. So this kind of selection bias sometimes makes our work harder.
 
Convincing people to do impact evaluations. Part of the trip was setting up new impact evaluations, and in discussing with programs why this might be a good idea, two points stood out. First, as I've pointed out in an earlier post, it's important that this activity be about learning, not so much accountability. This was really evident in one discussion, with an implementer and a funder both present. Clearly the implementer was worried we were there to judge the program. And making it clear that this was much more of a learning tool than an accountability tool helped make the value clear, I hope. Second was an interaction we had with a firm with whom we were trying to get some traction on some labor work. They got the point of doing an impact evaluation right away. They were, however, puzzled as to why we didn't charge for it.
 

Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office
