Allow me to take the occasion of the 236th “birthday” of my native country (celebrated on July 4th here in the U.S.) to go far afield and discuss a topic that, while grounded in empirical social science, doesn’t touch directly on impact evaluation: how an individual’s personality traits may be related to his or her relative wealth.
I am in the midst of a trip working on impact evaluations in Ghana and Tanzania, and these visits have really brought home the potential and pitfalls of working with programs’ monitoring data.
In many evaluations, the promise is significant. In some cases you can even do the whole impact evaluation with program monitoring data (for example, when a specific intervention is tried out with a subset of a program’s clients). In most cases, however, a combination of monitoring and survey data is required.
Alan Gerber and Don Green, political scientists at Yale and Columbia respectively, and authors of a large number of voting experiments, have a new textbook out titled Field Experiments: Design, Analysis, and Interpretation. This is noteworthy because, despite the massive growth in field experiments, to date there hasn’t been an accessible and modern textbook for social scientists looking to work in, or better understand, this area. The new book is very good, and I definitely recommend that anyone working in this area read at least the key chapters.
I’ve been working for the last couple of years with Tara Vishwanath, Nandini Krishnan and Matt Groh on a pilot program in Jordan which aims to get young women just graduating from community college into work. Today I want to describe what we did, and ask you to predict the results – which I will then share in a subsequent blog post.
This blog has previously explored the (somewhat rare) involvement of other social science disciplines in development economics research. Now a new book helps move the ball down the field a bit more. Entitled Children and Youth in Crisis, and edited by Mattias Lundberg and Alice Wuermli, the book combines various disciplinary perspectives on the impacts of economic shocks on human development.
Imagine you are running the recruitment process for a government agency and you are trying to attract high-quality, public-service-oriented staff to work in difficult agencies. How should you do this? If you offer higher wages, maybe you will get higher-quality folks, but will you lose public-service motivation? And how do you get these high-quality folks to go to remote and dangerous areas?
June 30 marks the end of the fiscal year at the World Bank, and an annual reminder of the stark irony of working in a bank that does not let you save – money is allocated to a particular fiscal year and, if not spent during this time, disappears into a vortex where it is reallocated elsewhere in the institution. This is a problem that is not unique to the World Bank – last week’s Science news had an article reporting on the findings of a blue-ribbon panel of business leaders.
· In case you missed it, the IDB authors of the one-laptop-per-child evaluation posted a response to Berk’s post on the IDB Development that Works blog. They discuss the context in which their evaluation was done, and the possible government rationale for investing in OLPC in Peru.
While some of us get to conduct individually randomized trials, I’d say that cluster randomized trials are pretty much the norm in field experiments in economics. Add to that our recently acquired ambition to run interventions with multiple treatment arms (rather than one treatment and one control group), mix in a pinch of logistical and budgetary constraints, and we end up with a non-negligible number of trials with small numbers of clusters (schools, clinics, villages, etc.).
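To make the setup concrete, here is a minimal sketch of cluster-level random assignment with multiple treatment arms. The cluster names, arm labels, and seed are my own illustrative assumptions, not taken from any of the trials mentioned; the point is simply that randomization happens at the cluster level, so a trial with, say, 12 schools and 3 arms leaves only 4 clusters per arm.

```python
import random

def assign_clusters(clusters, arms, seed=0):
    """Randomly assign whole clusters (e.g., schools) to treatment arms,
    keeping arm sizes as even as possible."""
    rng = random.Random(seed)
    shuffled = clusters[:]
    rng.shuffle(shuffled)
    # Deal the shuffled clusters round-robin into arms,
    # so arm sizes differ by at most one cluster.
    return {c: arms[i % len(arms)] for i, c in enumerate(shuffled)}

# Hypothetical example: 12 schools split across a control and two treatments
assignment = assign_clusters([f"school_{i}" for i in range(12)],
                             ["control", "treatment_A", "treatment_B"])
```

With only four clusters per arm, standard cluster-robust inference can be badly behaved, which is exactly why the small-number-of-clusters problem deserves attention.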