My latest working paper (joint with Sarojini Hirshleifer, Rita Almeida and Cristobal Ridao-Cano) presents results from an impact evaluation of a large-scale vocational training program for the unemployed in Turkey. I thought I’d briefly summarize the study, and then discuss a few aspects that may be of more general interest.
In his post this week on ethical validity in research, Martin Ravallion writes:
“Scaled-up programs almost never use randomized assignment so the RCT has a different assignment mechanism, and this may be contested ethically even when the full program is fine.”
Yesterday, Martin Ravallion wrote a piece titled ‘Taking Ethical Validity Seriously.’ It focused on ethically contestable evaluations and used RCTs as the main (only?) example of such evaluations. It is a good piece: researchers can always benefit from questioning themselves and their work in different ways.
More thought has been given to the validity of the conclusions drawn from development impact evaluations than to the ethical validity of how the evaluations were done. This is not an issue for all evaluations. Sometimes an impact evaluation is built into an existing program such that nothing changes about how the program works. The evaluation takes as given the way the program assigns its benefits. So if the program is deemed ethically acceptable, then this can be presumed to hold for the method of evaluation as well.
- Impact evaluation: a woman’s best friend? Marcelo Giugale and Markus discuss in the Huffington Post how impact evaluations can help progress towards gender equity, summarizing a variety of studies on what works and what doesn’t to help women.
As I procrastinate writing this post, it seems only fitting to take a look at a paper examining different commitment devices.
Carrying out evaluations to affect policy is a major motivation for many development economists. Grant proposals and the like typically ask researchers to document “How will your results affect policy?” In this post, we address a corollary of that question: “When and how should your results affect policy?” All the work that goes into the evaluation design at the start drums up a lot of enthusiasm among policymakers, and may open windows of opportunity for policy influence long before the final results from the evaluation are available.
- On the CGD blog, Jessica Goldberg corrects the weird NYT post by Casey Mulligan critiquing experiments
- Free online course on using randomized experiments to evaluate social programs to be offered by J-PAL: this is a four-week course starting April 1st.