When I was in second grade, I was in a Catholic school, and we had to buy the pencils and pens that we used at school from a supply closet. One day I felt like getting new pencils, so I stood in line when the supply closet was open and asked for two. Before reaching for the pencils, the person who operated the supply closet, Sister Evangelista, told me a story about her time volunteering in Haiti, how the children she taught there used to scramble about in garbage heaps looking for discarded pieces of wood, charcoal, and wire so that they could make their own pencils. I left the closet that day without any pencils and with a permanent sense of guilt when buying new school supplies.
I now feel the same way about baseline data. Most of the variables I have ever collected – maybe even 80 percent – sit unused, while only a small minority make it to any tables or graphs. Given the length of most surveys in low- and middle-income countries, I suspect that I am not alone in this. I know that baselines can be useful for evaluations and beyond (see this blog by David McKenzie on whether balance tests are necessary for evaluations and this one by Dave Evans for suggestions and examples of how baseline data can be better used). But do we really need to spend so much time and resources on them?
- Don't be afraid, we're just hiring: DIME is looking for a field coordinator based in Peru, and two research assistants based in Washington, DC (positions one and two).
- Was that a whisper I heard? Over at the CGD Blog, Sarah Rose goes hunting for signs of the use of evidence in RFPs from a large aid agency.
- Just don't look under the bed: On Goats and Soda, a nice piece on Banerjee et al.'s work on using postcards to reduce leakage from a huge social program in Indonesia.
- And should you really be afraid because it's the thirteenth? LiveScience debunks the odds that you'll be in a car wreck (British humor strikes again) and National Geographic explains why you need to leave the house...now. So stop your triskaidekaphobia before you hurt yourself.
- Dan Kopf at Quartz has a nice summary piece on “the transformative power of giving young women cash” – covering work in Malawi (by Berk and co-authors), in South Africa, in Bangladesh, and a forthcoming WBRO paper on cash transfers and intimate partner violence that overviews 14 studies: “eleven of these studies found a reduction in domestic violence, two found no change, and only one found an increase”.
- Scott Guggenheim offers his response to/critique of the recent 3ie evaluation of CDD programs on the From Poverty to Power blog.
- In a new Finance & PSD impact note, Miriam Bruhn, Rekha Reddy and Claudia Ruiz summarize work they have done in Mexico that uses matched diff-in-diff to evaluate the impact of offering technical assistance to rural banks - “technical assistance allowed rural credit unions to increase their operating efficiency and reduce their non-performing loans ratio. Part of these gains translated into higher returns for the credit unions, but they were also passed on to the final borrowers in the form of more credit at lower lending interest rates.” We are now up to 50 of these 2-page summaries of finance and private sector impact evaluations.
If one of our children skips school without our approval and we have not excused him or her in advance, my wife and I quickly receive a text message (see screenshot below), an email, and a phone call from the school district. A serious discussion ensues that evening.
The New York Times recently had a piece on the retraction and re-issuance of a study in Spain based on a randomized trial of the Mediterranean Diet’s effect on heart disease. The original study was meant to be an individual-level random assignment of 7,447 people aged 55 to 80 to one of three diets: a control diet (advice to just reduce fat content) or one of two variants of the Mediterranean Diet (in which participants were given free olive oil or free nuts). The study was originally published in the New England Journal of Medicine (NEJM) in 2013. The authors then appear to have been surprised to find their study on a list of suspicious trials. There are several parts to this story I thought would be of interest for doing impact evaluations in development, which I discuss below.
- In the Harvard Business Review, Blumenstock, Callen and Ghani summarize their work on using nudges to get government employees to save using mobile money in Afghanistan – “Over six months, the average employee who was enrolled to save by default accumulated an extra half-month’s salary in his or her savings account, relative to employees who had to opt in”.
- An intro to R for Stata users
- The promise and perils of listening to parents – Sharon Wolf on ongoing efforts in Ghana to improve pre-school quality, and how trying to bring parents onboard backfired.
- In the Journal of Development Effectiveness, Sabet and Brown track the continued growth of development impact evaluations: “Though we find early evidence of a plateau in the growth rate of development impact evaluations, the number of studies published between January 2010 and September 2015 account for almost two thirds of the total evidence base”. Lots of other interesting facts, including that 45% of all impact evaluations occurred in just 10 countries, with Kenya and Uganda having the most impact evaluations per million population, and Sub-Saharan Africa being the most commonly represented region – perhaps something for donors to think about...