In August, Patrick McEwan's meta-analysis of 76 randomized controlled trials (RCTs) on student learning in developing countries came out. I thought: Finally! The following month, Krishnaratne et al. released another meta-analysis, this one covering 75 randomized and quasi-experimental studies on both enrollment and learning outcomes.
Recently both the American Economic Association and 3ie have launched Impact Evaluation Trial Registries. The basic idea in both cases is for researchers to register in advance the details of an evaluation they plan to conduct. This has a couple of main purposes:
Many key economic decisions involve implicit trade-offs over time: how much to save or invest today affects how much to spend both today and tomorrow, and individuals will differ in their preferences for satisfaction today versus delayed satisfaction tomorrow. Economists call the relative preference (or disfavor) for the present over the future a discount rate (i.e. the rate at which we discount the future for the present), and the discount rate is a core parameter in economic models of choice and behavior.
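As a minimal numerical sketch of the idea (not from the original post; the function name and parameter values are illustrative, and standard exponential discounting is assumed), a payoff of x received t periods from now is worth x / (1 + r)^t today to someone with discount rate r:

```python
def present_value(x, r, t):
    """Value today of receiving x after t periods, discounting at rate r per period."""
    return x / (1 + r) ** t

# A more impatient person (higher r) values the same future payoff less.
patient = present_value(100, 0.05, 1)    # roughly 95.24
impatient = present_value(100, 0.20, 1)  # roughly 83.33
```

The gap between the two values is exactly the kind of heterogeneity in time preference that these models try to capture and that experiments try to measure.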
Guest Post by Paul Niehaus
GiveDirectly got started when some grad school friends and I decided we wanted to give our money – mostly hypothetical future money, at that point – to the poor.
Poor people often fail to access public services that are due to them, and information-based interventions have been proposed as a response. The premise is that lack of information is a decisive demand-side factor inhibiting successful participatory action by poor people to obtain the services to which they are entitled.
There has been a lot of recent debate and discussion about the role of cash grants in aid: is aid more effective when simply given as unrestricted cash than through approaches, such as conditional transfers, that try to restrict how recipients use the money they receive? Traditionally this debate has centered on food aid and education funding, but more recently it has also arisen with respect to funding small businesses.
A common critique of many impact evaluations, including those using both experimental and quasi-experimental methods, is that of external validity – how well do findings from one setting export to another? This is especially the case for studies done on relatively small samples, although, as I have ranted before, there appears to be a double standard in this critique when compared both to other subfields of economics and to other development literature.