- I liked the recent Planet Money podcast #698 (“A Long Way Home”) – there is an interesting discussion of why a lottery is held for access to a housing assistance program in Connecticut, and how they ended up with a lottery rather than other systems of allocating resources. There is also a great quote about the mishmash of anti-poverty programs in the U.S. which, paraphrasing, is basically “it is not like Congress ever sat down and said what is the best use of the money we set aside to fight poverty” – rather, many different programs have come up over time, each with its own rules and constituencies.
- The latest Journal of Economic Perspectives has a symposium on inequality beyond income (US-focused) and a paper on the Billion Prices Project that I linked to a blog post about last week.
- Should policy seek to promote small firms or large ones in Africa? Francis Teal on the CSAE blog: “Policy rhetoric focuses on the problems faced by small firms. Data from Ghana over the period for which we have it suggests that it is large firms that face the problems. Unloved possibly because they are not seen as beautiful they are vital for the output of the sector. Policy, not for the first time in Africa, seems to be focused on completely the wrong problem.”
- The Los Angeles Review of Books has a longish discussion of placebo effects in a review of an anthology on placebos – how they often don’t work in medicine as well as many people think, and how the term may be over-used in the social sciences.
- New in the working paper series: Is living in African cities expensive? Using data from the 2011 round of the International Comparison Program: “Readjusting the calculated price levels from national to urban levels, the analysis indicates that African cities are relatively more expensive, despite having lower income levels. The price levels of goods and services consumed by households are up to 31 percent higher in Sub-Saharan Africa than in other low- and middle-income countries, relative to their income levels. Food and non-alcoholic beverages are especially expensive, with price levels around 35 percent higher than in other countries.”
Researchers put a lot of effort into developing survey questionnaires designed to measure key outcomes of interest for their impact evaluations. But every now and then, despite efforts at piloting and fine-tuning surveys, some of the questions end up “not working”. The result is data that are so noisy, and/or missing for so many observations, that you may not want to use them in the final analysis. Just as pre-analysis plans have a role in specifying in advance which variables you will use to test which hypotheses, perhaps we also want to specify some rules in advance for when we won’t use the data we’ve collected. This post is a first attempt at doing so.
- Feeling bad about your latest rejection? Johannes Haushofer has bravely posted a CV of failures (inspired by this Naturejobs column) listing his papers that have been rejected, scholarships he was rejected for, PhD programs that turned him down, etc. - making the good point that failures are often invisible, while success is visible. Or just remind yourself that Akerlof’s Market for Lemons paper was rejected 3 times before it was published, and that many other classic papers in economics by Nobel laureates suffered similar fates.
- Nice summary in VoxEU of some of the lessons emerging from the Billion Prices Project.
- From 538, a discussion on basic incomes, including a cautionary tale about results on one outcome (that wasn’t the main one) driving policy decisions “While the purpose of the NIT pilots was to observe changes in work effort, an unrelated phenomenon caught the eye of critics: divorce. Controversy erupted when data from the Seattle and Denver studies seemed to show an increase in the divorce rate among participants (those findings were later discovered to be the result of a statistical error). The press spun wild stories, and the political credibility of NIT — and of basic income, for that matter — began to unravel.” And also how an economist with the delightful surname of Forget delved into data files on a forgotten Canadian program.
Consumption or income, valued at prevailing market prices, is the workhorse metric of economic welfare – poverty is almost universally defined in these terms. In low- and middle-income countries these measures of household resource availability are typically assessed through household surveys. Yet the global diversity in survey approaches is vast, with little rigorous evidence concerning which particular approach yields the most accurate resource estimate. (Indeed there may be no one approach that best suits every context – more on this below.)
With this question in mind, Kathleen Beegle, Joachim De Weerdt, John Gibson, and I conducted a survey measurement experiment that randomized common approaches to consumption measurement across a representative sample of households in Tanzania. Previous papers have explored the relative performance of these approaches in terms of mean consumption, inequality, poverty, and the prevalence of hunger (see these papers here, here, and here). Our new working paper seeks to push these data further to understand the nature of the reporting errors that underlie the mean estimates.
- The inaugural issue of Development Engineering is now out (all issues are open access!). I’m delighted that my paper on attempting to use RFID to track small firm sales is in this first issue, along with a paper on how to randomize better in sequential randomized trials, a paper that proposes a “system [which] leverages smartphones, cellular based sensors, and cloud storage and computing to lower the entry barrier to impact evaluation”, a paper on biomass stoves, and one on rural electrification. Note also this from the editor’s introduction: “we see major benefits from publishing studies that find weak or no impacts. In global development, there should be no silent failures; there is inherent value in learning from interventions that fail to achieve their intended impacts.”
I’ve been travelling the past week, and had several people contact me with questions about impact evaluation while away. I figured these might come up again, so I thought I’d put up the questions and answers here in case they are useful for others.
Question 1: Winsorizing – “do we do this on the whole sample, or do we do it within treatment and control, baseline and follow-up?”
Winsorizing is commonly used to deal with outliers: for example, you might set all data points above the 99th percentile equal to the 99th percentile. The key here is that you don’t use different cut-offs for treatment and control. For example, suppose you have a treatment for businesses that makes 4 percent of the treatment group grow their sales massively. If you winsorize the treatment group at the 95th percentile of the treatment distribution and the control group at the 95th percentile of the control distribution, you might end up completely missing the treatment effect. I do think it makes sense to use separate cutoffs by survey round, to allow for seasonal effects and so that you aren’t winsorizing more points from one round than another (which could happen if you used the same global cutoffs for all rounds).
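A minimal sketch of this in pandas, assuming a long-format dataset with hypothetical columns `round`, `treated`, and `sales` (the column names and the toy data are illustrative, not from the experiment): the cutoff is computed on the pooled treatment-plus-control sample, but separately within each survey round.

```python
import numpy as np
import pandas as pd

def winsorize_by_round(df, var, pct=99):
    """Cap `var` at the given percentile, where the cutoff is
    computed on the pooled treatment + control sample within
    each survey round (never separately by treatment status)."""
    out = df.copy()
    for rnd, grp in out.groupby("round"):
        cutoff = np.percentile(grp[var].dropna(), pct)
        out.loc[out["round"] == rnd, var] = grp[var].clip(upper=cutoff)
    return out

# Toy data: two survey rounds, a treatment dummy, and skewed sales.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "round": [1] * 100 + [2] * 100,
    "treated": rng.integers(0, 2, 200),
    "sales": rng.lognormal(mean=3, sigma=1, size=200),
})

wins = winsorize_by_round(df, "sales")
```

Because the cutoff pools treatment and control, a treatment that shifts the upper tail still shows up in the winsorized means; only the most extreme values in each round are capped.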
In an article in Slate yesterday, the co-founders of GiveDirectly announced that they will provide at least 6,000 people in Kenya with a basic income grant (BIG) for a period of 10-15 years, at a cost of about $30 million. The proposal is scant on details at the moment, but this article in Vox suggests that dozens of villages will be randomly selected in an already-chosen region of Kenya for this exercise, and everyone within them will be given roughly a dollar a day per person for a decade.