Some tips on doing impact evaluations in conflict-affected areas

Markus Goldstein
I’ve recently been doing some work with my team at the Gender Innovation Lab on data we collected that was interrupted by conflict (and by conflict here I mean the armed variety, between organized groups). This got me thinking about how doing an impact evaluation in a conflict situation is different, so I reached out to a number of people – Chris Blattman, Andrew Beath, Niklas Buehren, Shubha Chakravarty, and Macartan Humphreys – for their views (collectively they’re “the crowd” in the rest of this post). What follows are a few of my observations and a heck of a lot of theirs (and of cou

The potential perils of blogging about ongoing experiments

David McKenzie

One of the comments we got last week was a desire to see more “behind-the-scenes” posts on the trials and tribulations of trying to run an impact evaluation. I am sure we will do more of these, but there are many times I have thought about doing so and baulked for one of the following reasons:

When Randomization Goes Wrong...

Berk Ozler

An important, and stressful, part of the job when conducting studies in the field is managing the number of things that do not go according to plan. Markus, in his series of field notes, has written about these roller-coaster rides we call impact evaluations (see, for example, here and here).