I was at a workshop last week where I was moderating a discussion of the practicalities of doing impact evaluations in conflict and post-conflict settings. One of the program-implementation folks made clear that working with the impact evaluation was a strain -- as she put it, this "was pulling our field staff through a keyhole". Which got me thinking about the costs that we, as impact evaluators, can impose on a program.
- The Economist today has an article, Pennies from Heaven, on giving cash transfers to the poor -- it discusses the recent GiveDirectly evaluation, Berk’s work in Malawi and an overview piece on CCTs, my Ghana experiment, and more.
I recently finished teaching smart and hard-working honours students. In Growth and Development, we covered equity and talked about inequalities of opportunity (and outcomes) across countries, across regions within countries, between different ethnic groups, genders, etc. In Population and Labour Economics, we covered intra-household bargaining models and how spending on children may vary depending on the relative bargaining power of the parents.
- Why control groups are ethical and necessary in the Huffington Post: “the importance of knowing whether or not new methods add to student outcomes is so great that one could argue that it is unethical not to agree to participate in experiments in which one might be assigned to the control group”
The old saw goes: when you have a hammer, everything looks like a nail. But what if the best way to fix your broken policy is actually a bolt? I was recently at a workshop where someone was presenting preliminary results of an evaluation of a cash transfer program which, while perhaps started with social-protection objectives in mind, actually seems to have had impacts on business creation and revenues that dwarf those of your average business training or microfinance program.
In August, Patrick McEwan's meta-analysis of 76 randomized controlled trials (RCTs) on student learning in developing countries came out. I thought: Finally! The following month, Krishnaratne et al. published another meta-analysis, this one analyzing 75 randomized and quasi-experimental studies on both enrollment and learning outcomes.
Recently both the American Economic Association and 3ie have launched Impact Evaluation Trial Registries. The basic idea in both cases is for researchers to register in advance the details of an evaluation they are planning on doing. This has a couple of main purposes:
Many key economic decisions involve implicit trade-offs over time: how much to save or invest today affects how much to spend both today and tomorrow, and individuals will differ in their preferences for satisfaction today versus delayed satisfaction tomorrow. Economists call the relative preference (or disfavor) for the present over the future a discount rate (i.e. the rate at which we discount future payoffs relative to present ones), and the discount rate is a core parameter in economic models of choice and behavior.
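To make the idea concrete, here is a minimal sketch of standard exponential discounting. The numbers and the `present_value` helper are purely illustrative (not from the post): they just show how a higher discount rate makes the same future stream worth less today.

```python
def present_value(flows, r):
    """Discount a sequence of payoffs (one per period, starting at
    period 0) back to the present at per-period discount rate r."""
    return sum(x / (1 + r) ** t for t, x in enumerate(flows))

# Hypothetical stream: a payoff of 100 in each of periods 0, 1, and 2.
stream = [100, 100, 100]

# A more patient agent (lower r) values the identical future stream
# more highly than a more impatient one (higher r).
patient = present_value(stream, 0.05)    # roughly 285.9
impatient = present_value(stream, 0.50)  # roughly 211.1
```

Eliciting this `r` from real choices (e.g., "100 today or 150 in a month?") is exactly why measuring time preferences matters for the models the post goes on to discuss.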