
Survey design

What’s New in Measuring Subjective Expectations?

David McKenzie

Last week I attended a workshop on Subjective Expectations at the New York Fed. There were 24 new papers on using subjective probabilities and subjective expectations in both developed and developing country settings. I thought I'd summarize some of what I learned and the findings most likely to interest our readers:

Subjective Expectations don’t provide a substitute for impact evaluation
I presented a new paper based on the large business plan competition in Nigeria that I conducted an impact evaluation of. Three years after applying for the program, I elicited expectations from the treatment group (competition winners) of what their businesses would be like had they not won, and from the control group of what their businesses would have been like had they won. The key question of interest is whether these individuals can form accurate counterfactuals. If they could, this would give us a way to measure the impacts of programs without control groups (just ask the treated for counterfactuals) and to derive individual-level treatment effects. Unfortunately, the results show that neither the treatment nor the control group can form accurate counterfactuals. Both overestimate how important the program was for their businesses: the treatment group thinks it would be doing worse had it lost than the control group actually is doing, while the control group thinks it would be doing much better had it won than the treatment group is actually doing. In a dynamic environment, where businesses are changing rapidly, subjective expectations do not seem to offer a substitute for impact evaluation counterfactuals.
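
To make the comparison concrete, here is a minimal sketch in Python of the test at the heart of this exercise. This is not the paper's code; the column names (arm, actual_profit, counterfactual) and all numbers are hypothetical.

# A toy check of counterfactual accuracy: if elicited counterfactuals were
# right on average, each arm's mean imagined outcome should match the other
# arm's mean actual outcome. All column names and numbers are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "arm":            ["treat", "treat", "treat", "control", "control", "control"],
    "actual_profit":  [900, 1100, 1000, 500, 650, 600],   # observed outcome
    "counterfactual": [400, 550, 500, 950, 1200, 1000],   # elicited "had the opposite happened"
})

treat = df[df["arm"] == "treat"]
control = df[df["arm"] == "control"]

# Gap between what winners imagine "had we lost" and what losers actually earn
bias_treat = treat["counterfactual"].mean() - control["actual_profit"].mean()
# Gap between what losers imagine "had we won" and what winners actually earn
bias_control = control["counterfactual"].mean() - treat["actual_profit"].mean()

print(f"Treatment's imagined loss scenario vs control's reality: {bias_treat:+.0f}")
print(f"Control's imagined win scenario vs treatment's reality:  {bias_control:+.0f}")

In this made-up example both gaps run in the direction the paper finds: the treatment group's imagined counterfactual falls below the control group's actual outcomes, and the control group's imagined counterfactual exceeds the treatment group's actual outcomes.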

A curated list of our postings on Measurement and Survey Design

David McKenzie

This list is a companion to our curated list on technical topics. It puts together our posts on measurement, survey design, sampling, survey checks, managing survey teams, reducing attrition, and all the behind-the-scenes work required to get the data needed for impact evaluations.
Measurement

9 pages or 66 pages? Questionnaire design’s impact on proxy-based poverty measurement

Talip Kilic

This post is co-authored with Thomas Pave Sohnesen

Since 2011, we have struggled to reconcile the poverty trends from two complementary poverty monitoring sources in Malawi. From 2005 to 2009, the Welfare Monitoring Survey (WMS) was used to predict consumption and showed a solid decline in poverty. In contrast, the 2004/05 and 2010/11 rounds of the Integrated Household Survey (IHS), which measured consumption through recall-based modules, showed no decline.

Today's blog post is about a household survey experiment and our working paper, which can, at least partially, explain why complementary monitoring tools can yield different results. The results are also relevant for other tools that rely on vastly different instruments to measure the same outcomes.
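
For readers unfamiliar with proxy-based measurement, here is a minimal sketch of the general technique: fit a consumption model on easy-to-collect proxies in a survey that measures consumption directly, then predict poverty where only the proxies are collected. This is an illustration under made-up data, not the WMS methodology or the paper's code.

# A minimal sketch of proxy-based poverty measurement: train on a survey
# with measured consumption, predict a headcount where only proxies exist.
# Variable names, coefficients, and the poverty line are all illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000

# "IHS-like" training data: proxies plus directly measured log consumption
X_train = rng.normal(size=(n, 3))   # e.g., asset index, household size, education
log_cons = 1.0 + X_train @ np.array([0.5, -0.3, 0.4]) + rng.normal(scale=0.5, size=n)

model = LinearRegression().fit(X_train, log_cons)

# "WMS-like" survey: proxies only; predict consumption and a poverty headcount
X_new = rng.normal(size=(500, 3))
pred_log_cons = model.predict(X_new)
poverty_line = 1.0                  # illustrative log poverty line
headcount = (pred_log_cons < poverty_line).mean()
print(f"Predicted poverty headcount: {headcount:.1%}")

The catch, and the point of the experiment, is that predictions inherit the model's assumptions: if questionnaire design (9 pages versus 66 pages) shifts how the proxies themselves are reported, predicted poverty trends can diverge from directly measured ones.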

When context matters

Markus Goldstein

Co-authored with Sabrina Roshan

Imagine you are out on a pretest of a survey. Part of the goal is to measure the rights women have over property. The enumerator is trying out a question: "Can you keep farming this land if you were to get divorced?" The woman responds: "It depends on whose fault it is." Welcome to yet another land where no one has heard of no-fault divorce.

Measuring secrets

Markus Goldstein

One of the things I learned in my first fieldwork experience was that keeping interviews private is critical if you want unbiased information. Why? At the time it should have been fairly obvious to me: there are certain questions that a person will answer differently depending on who else is in the room. We were doing a socio-economic survey of rural households in Ghana, and we thought that income, in particular, would be sensitive, since spouses tended to share this information with each other selectively and perhaps strategically.