Some advice from survey implementers: Part 2

This is part 2 of a two-part blog on what survey implementers would tell the researchers and others who work with them (part one is here). Before we dive in, I want to reiterate my thanks to the folks at EDI and IPA, as well as James Mewera of the Invest in Knowledge Initiative, Ben Watkins at Kimetrica, and Firman Witoelar at SurveyMeter, who took the time to send me really careful thoughts and then answer my queries. As before, don't take anything below as something specific that any one of them said – I've edited, adjusted, and merged. Blame me if you don't like it. One final note: as you can see from the list, not every one of these is a commercial firm, and some of them do research as well – so keep that in mind when filtering the advice. I'll abbreviate them as SOs, for survey organizations.
 
Please read this post as me channeling and interpreting their voices. I am not sure I agree with everything I heard, but I am passing it on. And all of it gave me food for thought. Stuff in [italics] is me explicitly responding to a couple of points.
 

Some advice from survey implementers: Part 1

I have often wondered what the folks who do the surveys I use in my research think of what it is like to work with me. Since I wasn't sure I had the courage to hear that said to my face, I wrote to a number of survey folks I knew (and thought highly of) or whom other people recommended. I asked them what they would tell researchers in general.
 

Some tips on doing impact evaluations in conflict-affected areas

I've recently been doing some work with my team at the Gender Innovation Lab on data whose collection was interrupted by conflict (and by conflict here I mean the armed variety, between organized groups). This got me thinking about how doing an impact evaluation in a conflict situation is different, and so I reached out to a number of people – Chris Blattman, Andrew Beath, Niklas Buehren, Shubha Chakravarty, and Macartan Humphreys – for their views (collectively, they're "the crowd" in the rest of this post). What follows are a few of my observations and a heck of a lot of theirs (and of cou…

Getting better access to impact evaluation data

If the data and related metadata collected for impact evaluations were more readily discoverable, searchable, and available, the world would be a better place. Well, at least the research would be better. It would be easier to replicate studies and, in the process, to expand them by, for example, trying other outcome indicators, checking robustness, and looking for heterogeneous effects (e.g.

Tell us – is there a missing market for collaboration on surveys between WB staff, researchers and students?

It has occurred to me that more people than ever are doing surveys of various sorts in developing countries, and that many graduate students, young faculty, and other researchers would love the opportunity to cheaply add questions to a survey. I therefore wonder whether there is a missed opportunity for the two sides to get together. Let me explain what I'm thinking of, and then let us know whether you think this is really an issue or not.