The “science of delivery”, a relatively new term among development practitioners, refers to the focused study of the processes, contexts, and general determinants of the delivery of public services and goods. Or to paraphrase my colleague Adam Wagstaff, the term represents a broadening of inquiry towards an understanding of the “how to deliver” and not simply a focus on the “what to deliver”.
We promised some time ago to review the recent working paper by Pritchett and Sandefur on external validity, and the title of this post is the main take-away for me: my name is Berk Özler and I agree with this specific message. However, while I'd like to say that there is much more here, I am afraid that I, personally, did not find much else to write home about...
I guess people were more focused on scaring up candy than writing this week, but a few interesting links:
I was at a workshop last week where I was moderating a discussion of the practicalities of doing impact evaluations in conflict and post-conflict settings. One of the program-implementation folks made clear that working with the impact evaluation was a strain -- as she put it, it "was pulling our field staff through a keyhole." Which got me thinking about the costs that we, as impact evaluators, can impose on a program.
- The Economist today has an article, Pennies from Heaven, on giving cash transfers to the poor -- it discusses the recent GiveDirectly evaluation, Berk's work in Malawi, an overview piece on CCTs, my Ghana experiment, and more.
I recently finished teaching smart and hard-working honours students. In Growth and Development, we covered equity and talked about inequalities of opportunity (and outcomes) across countries, across regions within countries, between different ethnic groups, genders, etc. In Population and Labour Economics, we covered intra-household bargaining models and how spending on children may vary depending on the relative bargaining power of the parents.
- Why control groups are ethical and necessary in the Huffington Post: “the importance of knowing whether or not new methods add to student outcomes is so great that one could argue that it is unethical not to agree to participate in experiments in which one might be assigned to the control group”
The old saw goes: when you have a hammer, everything looks like a nail. But what if the best way to fix your broken policy is actually a bolt? I was recently at a workshop where someone was presenting preliminary results of an evaluation of a cash transfer program which, while perhaps started with social-protection objectives in mind, actually seems to have had impacts on business creation and revenues that dwarf those of your average business training or microfinance program.
In August, Patrick McEwan's meta-analysis of 76 randomized controlled trials (RCTs) on student learning in developing countries came out. I thought: Finally! The following month, Krishnaratne et al. came out with another meta-analysis, this one analyzing 75 randomized and quasi-experimental studies on both enrollment and learning outcomes.