
Electronic versus paper-based data collection: reviewing the debate

This post was co-authored by Sacha Dray, Felipe Dunsch, and Marcus Holmlund.

Impact evaluation needs data, and research teams often collect it from scratch. Raw data fresh from the field is a bit like dirty laundry: it needs cleaning. Some stains are unavoidable – we all spill wine, sauce, or coffee on ourselves from time to time, which is mildly frustrating but easy to dismiss as a fact of life, a random occurrence. But when such occurrences become regular, we might begin to ask whether something is systematically wrong.

Issues of data collection and measurement

By Berk Ozler
About five years ago, soon after we started this blog, I wrote a blog post titled “Economists have experiments figured out. What’s next? (Hint: It’s Measurement)”. Shortly after the post appeared, folks from IPA emailed me to say we should experiment with some important measurement issues, making use of IPA’s network of studies around the world.

A curated list of our postings on Measurement and Survey Design

By David McKenzie
This list is a companion to our curated list on technical topics. It brings together our posts on measurement, survey design, sampling, survey checks, managing survey teams, reducing attrition, and all the behind-the-scenes work needed to get the data for impact evaluations.
Measurement

What can marketing experiments teach us about doing development research?

By David McKenzie

The March 2011 issue of the Harvard Business Review has “a step-by-step guide to smart business experiments” by Eric Anderson and Duncan Simester, two marketing professors who have run a number of experiments with large firms in the U.S. Their bottom-line message for businesses is: