I attended this conference in Madison, WI last week, which was quite pleasant except for the weather – it snowed! On the other hand, the fried cheese curds and scotch ales consumed with friends and colleagues after hours (especially those who traveled thousands of miles to get there: that prize almost always goes to John Gibson from the University of Waikato in New Zealand) were nice. So was meeting new people and seeing new work in progress. It could be a small-N problem, but it seems to me that a lot more people are working on two broad areas: savings instruments and gender discrimination. There was a special session on experiments in saving. There were many good papers and a smaller number of not-so-great ones, but two papers stuck with me more than the others. Both concern malaria in sub-Saharan Africa.
The first is a paper by Ben Yarnoff on competing disease risks in sub-Saharan Africa. Starting from the observations that (a) 75% of child mortality in SSA is due to diarrhea, malaria, and pneumonia, (b) private spending on the prevention of these diseases is very low, and (c) private spending on ex-post treatment of these diseases is relatively high (presumably compared with household income), he develops a model in which households increase spending on the prevention of one of these diseases if the risk of another declines exogenously (otherwise, it makes more sense to take a wait-and-see approach and seek treatment ex post). Treating Vitamin A distribution as an exogenous decrease in the risk of death from diarrhea, he finds that households in areas where such distribution occurred are more likely to spend money on bed nets. This is a clever idea, the kind that normally takes a while to come up with and develop. If I have one concern about the paper, it is that the effectiveness of Vitamin A in bolstering children's immune systems against the effects of diarrhea is not crystal clear in the relevant literature, nor are its independent effects with respect to malaria, or its possible side effects.
The second (by Dillon, Friedman, and Serneels) examines the impact of testing for and treating malaria on the productivity of workers on sugarcane plantations in Nigeria. (Caveat: the paper is preliminary and, according to the authors, should not be cited just yet. I got oral permission to write this blurb.) They run an experiment that tests workers for malaria and treats those who test positive. The subsequent productivity gains for workers with malaria are large. Given the relatively low prevalence, however, they conclude that it would not be cost-effective for the plantation owners to test and treat everyone. But then it is a puzzle why the sick workers themselves do not seek treatment: they do not seem credit-constrained, and the gains are larger than the cost of testing and treatment. There is room here for some interesting follow-up experiments, or at least some subsequent non-experimental work, to solve the puzzle.
Speaking of follow-up experiments, we heard an interesting paper at our seminar series at the World Bank by Chassang, Miquel, and Snowberg, which tries to extract efficacy from effectiveness trials. In particular, the authors want to separate the effect of a treatment from the effect of effort and their interaction: think of fertilizers that work better if you apply them properly, or statins that lower your cholesterol more if you also eat well and exercise. Their idea is to run a trial with two arms, where the probability of being treated is very small in one group and very high in the other. In a theoretical framework where the sample size is infinite, you will have people who think they are being treated but are not, and people who think they got the placebo but got the real thing. The former will exert effort, while the latter will slack off, yielding the effects of the pill and of effort separately, as well as their interaction. This knowledge matters if we also need to provide incentives on the effort side. Armed with this information, you can run follow-up trials to improve effectiveness.
The idea is similar to what Jeffrey Smith (as he told me over a nice dinner last week) and James Heckman call 'randomization bias': because in a trial setting I have a 50% chance of getting the real thing, I slack off somewhat. By having two trial arms in which the probability of being treated is small (or large) enough to ensure total lack of (or full) effort, you can get around this; that is why this is akin to a principal-agent approach to RCTs. Most people at the seminar, however, were pondering whether we could take advantage of this approach in any of our ongoing or upcoming projects: with smaller sample sizes, and barring lying to the subjects (which is neither ethical nor sustainable), the power to disentangle these effects will be low.
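To see the mechanics of that two-arm idea, here is a quick simulation sketch. All effect sizes, the effort rule (full effort in the high-probability arm, none in the low-probability arm), and the sample size are my own illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical effect sizes, chosen only for illustration.
B_TREAT, B_EFFORT, B_INTER = 2.0, 1.0, 1.5
N = 200_000  # a large N stands in for the paper's infinite-sample idealization

def simulate_arm(p_treat, n):
    """One arm of the trial. Effort is driven by the *announced* probability
    of treatment: subjects in the high-probability arm expect the real thing
    and exert full effort; subjects in the low-probability arm expect the
    placebo and slack off entirely (the idealized case in the post)."""
    effort = 1.0 if p_treat > 0.5 else 0.0
    treated = rng.random(n) < p_treat
    y = (B_TREAT * treated + B_EFFORT * effort
         + B_INTER * treated * effort + rng.normal(0.0, 1.0, n))
    return treated, y

t_lo, y_lo = simulate_arm(0.05, N)  # almost nobody treated -> no effort
t_hi, y_hi = simulate_arm(0.95, N)  # almost everybody treated -> full effort

# Treated vs. untreated in the no-effort arm isolates the pill alone.
pill = y_lo[t_lo].mean() - y_lo[~t_lo].mean()
# The same contrast in the full-effort arm gives pill + interaction.
pill_plus_inter = y_hi[t_hi].mean() - y_hi[~t_hi].mean()
# Untreated subjects across the two arms differ only in effort.
effort_effect = y_hi[~t_hi].mean() - y_lo[~t_lo].mean()

print(round(pill, 2), round(effort_effect, 2), round(pill_plus_inter - pill, 2))
```

The identifying observations come from the small minorities who "guess wrong" about their arm: the untreated few in the high-probability arm and the treated few in the low-probability arm. That is also where the power problem in the seminar discussion bites: with realistic sample sizes those minorities are tiny, so the contrasts above are very noisy.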
That’s it for this week. By now, you’ve probably figured out that each of us posts on a specific day of the week. Hopefully you’re enjoying this new blog equally every day, but just in case you already have your favorite bloggers, you can look forward to Mondays for David, Tuesdays for Markus, Wednesdays for Jed, and Thursdays for yours truly. We’ll try to have guest bloggers, etc., on Fridays: if you have ideas or would like to volunteer, please feel free to write to us.
In the coming weeks, we’ll have a review (or perhaps two) of the new Banerjee and Duflo book. I plan to write on ‘sub-group analysis’ in RCTs and the importance of measurement issues, along with some lighter material...