Some tips on doing impact evaluations in conflict-affected areas

I’ve recently been doing some work with my team at the Gender Innovation Lab on data we collected that was interrupted by conflict (and by conflict here I mean the armed variety, between organized groups). This got me thinking about how doing an impact evaluation in a conflict situation is different, so I reached out to a number of people – Chris Blattman, Andrew Beath, Niklas Buehren, Shubha Chakravarty, and Macartan Humphreys – for their views (collectively they’re “the crowd” in the rest of this post). What follows are a few of my observations and a heck of a lot of theirs (and of cou

Getting better access to impact evaluation data

If the data and related metadata collected for impact evaluations were more readily discoverable, searchable, and available, the world would be a better place. Well, at least the research would be better. It would be easier to replicate studies and, in the process, to expand them by, for example, trying other outcome indicators, checking robustness, and looking for heterogeneity effects (e.g.

Tell us – is there a missing market for collaboration on surveys between WB staff, researchers and students?

The thought has occurred to me that there are more people than ever doing surveys of various sorts in developing countries, and many graduate students, young faculty, and other researchers would love the opportunity to cheaply add questions to a survey. I therefore wonder whether there is a missed opportunity for the two sides to get together. Let me explain what I’m thinking of, and then you can let us know whether you think this is really an issue or not.