So recently one of the government agencies I am working with was telling me that they were getting a lot of pressure from communities who had been randomized out of the first phase of a program. The second phase is indeed coming (that's when these communities will get their funding), but the second round of the survey has been delayed, as was implementation of the first phase of the program. None of that makes the pressure any less understandable.
Markus Goldstein's blog
So in my quest to understand the gender dimensions of water supply this week, I stumbled upon a nice paper by Florencia Devoto and coauthors. They look at the effects of providing piped water in Tangiers, Morocco. The immediately cool thing about this paper is that they got something quite hard – randomization in an infrastructure project.
This week I want to talk about some interesting work that Gharad Bryan, Shyamal Chowdhury and Mushfiq Mobarak are doing in Bangladesh (policy note and presentation are online, paper coming shortly).
I recently came across a paper by Kelsey Jack, a white paper for the J-PAL and CEGA Agricultural Technology Adoption Initiative (ATAI). The paper systematically explores the barriers to technology adoption that come from market inefficiencies, what we know about these barriers, and what research is going on (under ATAI) to fill the gaps.
OK, let’s put two blog posts in a pot and stir. In a previous post on measuring consumption, Jed gave us some food for thought, while over on Aid Thoughts, Matt is talking about a respondent who met the enumerator on the sly to report land that he didn’t want his wife to know about. Put it together, and what do you have?
Following up on Michael’s post yesterday, I wanted to add a couple of thoughts.
One of the things I learned in my first field work experience was that keeping interviews private was critical if you wanted unbiased information. Why? I guess at the time it should have been kind of obvious to me – there are certain questions that a person will answer differently depending on whom else is in the room. We were doing a socio-economic survey of rural households in Ghana, and we thought that income, in particular, would be sensitive, since spouses tended to share information on this selectively and perhaps in a strategic way.
As a fair number of the impact evaluations I work on involve programs designed by governments or NGOs, I often have to start with a tricky conversation when it comes time to do the power calculations for the evaluation design. The subject of that conversation is the anticipated effect size. This is a key parameter: if it’s too optimistic, you run the risk of an evaluation that detects no effect even when the program worked to some (lesser) degree; if it’s too pessimistic, you end up surveying a larger sample than you need, wasting money and people’s time.
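The stakes of that conversation are easy to see in a back-of-the-envelope power calculation. As a hypothetical illustration (the function name and defaults below are mine, not from any particular evaluation), the standard two-sample formula shows how quickly the required sample grows as the assumed effect shrinks:

```python
from statistics import NormalDist

def sample_size_per_arm(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per arm for a two-sided two-sample
    comparison of means, with effect_size in standard-deviation
    units (Cohen's d). Standard normal-approximation formula."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for power = 0.80
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# Halving the assumed effect size quadruples the required sample:
print(round(sample_size_per_arm(0.4)))  # roughly 98 per arm
print(round(sample_size_per_arm(0.2)))  # roughly 392 per arm
```

So an optimistic effect size of 0.4 standard deviations that turns out to be 0.2 in reality means the study is badly underpowered; a pessimistic guess means surveying four times as many people as needed.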
In some joint work with an African government, my colleagues Francisco Campos, Jessica Leino and I were trying to evaluate the impacts of one of their support programs for small businesses. This service was open to anyone who contacted them, but the number of entrepreneurs who knew about the program (and hence who used it) was low. Basically, the way the program worked was that when the entrepreneur came into the office and registered for the program, the implementing agency would assess the needs of the business and then provide the entrepreneur with subsidized access to