
Should you oversample compliers if budget is limited and you are concerned take-up is low?

David McKenzie

My colleague Bilal Zia recently released a working paper (joint with Emmanuel Hakizimfura and Douglas Randall) that reports on an experiment conducted with 200 Savings and Credit Cooperative Associations (SACCOs) in Rwanda. The experiment tested two different approaches to decentralizing the delivery of financial education, and finds that improvements are greater when SACCOs get to choose which staff should be trained than when they are told to send the manager, a loan officer, and a board member.

One point of the paper that I thought might be of broader interest to our readers concerns the issue of what to do when you only have enough budget to survey a sample of a program’s beneficiaries, and you are concerned about getting enough compliers.
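To see why low take-up makes the budget question bite, here is a minimal back-of-the-envelope sketch (my own illustration, not taken from the paper). With take-up rate p among those offered the program, the intention-to-treat effect shrinks to p times the effect on compliers, so the sample needed to detect it grows roughly with 1/p². The function name, effect size, and variance below are illustrative assumptions, not values from the study.

```python
# A back-of-the-envelope sketch (my illustration, not from the paper):
# how required survey size grows when take-up is low. With take-up rate p,
# the ITT effect is p * delta, so required n scales roughly with 1 / p**2.

from scipy.stats import norm

def per_arm_n(delta, sigma=1.0, take_up=1.0, alpha=0.05, power=0.8):
    """Approximate per-arm sample size for a two-arm comparison of means,
    with the detectable effect diluted by the assumed take-up rate."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    itt_effect = take_up * delta   # effect seen in an intention-to-treat comparison
    return 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / itt_effect ** 2

for p in (1.0, 0.5, 0.25):
    print(f"take-up {p:.0%}: ~{per_arm_n(delta=0.2, take_up=p):,.0f} respondents per arm")
```

This dilution is what makes the question of whether to oversample likely compliers, rather than survey beneficiaries at random, worth thinking through when the survey budget is fixed.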

Lessons from a cash benchmarking evaluation: Authors' version

This is a guest post by Craig McIntosh and Andrew Zeitlin.

We are grateful to have this chance to speak about our experiences with USAID's pilot of benchmarking its traditional development assistance against unconditional cash transfers. Along with the companion benchmarking study that is still in the field (that one compares a youth workforce readiness program to cash), we have spent the past two and a half years working to design these head-to-head studies, and are glad to have a chance to reflect on the process. These are complex studies with many stakeholders and lots of collective agreements over communications, and our report to USAID, released yesterday, reflects that. Here, we convey our personal impressions as researchers involved in the studies.

Weekly links September 14: stealth cash vs WASH, online job boards, income-smoothing from bridges, lowering interest rates through TA, and more...

David McKenzie

Declaring and diagnosing research designs

This is a guest post by Graeme Blair, Jasper Cooper, Alex Coppock, and Macartan Humphreys

Empirical social scientists spend a lot of time trying to develop really good research designs and then trying to convince readers and reviewers that their designs really are good. We think the challenges of generating and communicating designs are made harder than they need to be because (a) there is not a common understanding of what constitutes a design and (b) there is a dearth of tools for analyzing the properties of a design.
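As a rough illustration of what "diagnosing" a design can mean in practice, here is a toy simulation sketch (my own example, not the authors' DeclareDesign software): write down a simple two-arm design as a data-generating process plus an estimator, then estimate its properties, such as bias and power, by Monte Carlo. The sample size, effect size, and function name are assumptions chosen for illustration.

```python
# A toy illustration of the idea (not the authors' DeclareDesign tools):
# "declare" a simple two-arm design and "diagnose" it by simulation,
# estimating the bias and power of the difference-in-means estimator.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def diagnose_design(n=200, effect=0.3, sims=2000, alpha=0.05):
    estimates, rejections = [], []
    for _ in range(sims):
        treat = rng.integers(0, 2, size=n)                  # simple random assignment
        y = effect * treat + rng.normal(size=n)             # outcome model
        diff = y[treat == 1].mean() - y[treat == 0].mean()  # difference-in-means estimator
        _, p = stats.ttest_ind(y[treat == 1], y[treat == 0])
        estimates.append(diff)
        rejections.append(p < alpha)
    return {"bias": float(np.mean(estimates) - effect),
            "power": float(np.mean(rejections))}

print(diagnose_design())
```

The appeal of a common framework is that once a design is written down this explicitly, its properties can be computed, compared, and communicated before any data are collected.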

Cash grants and poverty reduction

Berk Ozler

Blattman, Fiala, and Martinez (2018), which examines the nine-year effects of a group-based cash grant program for unemployed youth to start individual enterprises in skilled trades in Northern Uganda, was released today. Those of you well versed in the topic will remember Blattman et al. (2014), which summarized the impacts from the four-year follow-up. That paper found large earnings gains and capital stock increases among those young, unemployed individuals who formed groups, proposed to form enterprises in skilled trades, and were selected to receive the lump-sum grants of approximately $400 per person (in 2008 USD using market exchange rates) on offer from the Northern Uganda Social Action Fund (NUSAF). I figured that a summary of the paper that goes into some minutiae might be helpful for those of you who will not read it carefully – despite your best intentions. I had an early look at the paper because the authors kindly sent it to me for comments.

Weekly links September 7: summer learning, wisdom from Manski, how the same data gives many different answers, and more...

David McKenzie
A catch-up of some of the things that caught my attention over our break.
  • The NYTimes Upshot covers an RCT of the Illinois Wellness program, in which the authors find no effect, but show that if they had used non-experimental methods, they would have concluded the program was successful (a stylized sketch of this contrast appears after this list).
  • Published in August, “many analysts, one data set”, highlighting how many choices are involved in even simple statistical analysis – “Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates.”
  • Video of Esther Duflo’s NBER Summer institute lecture on machine learning for empirical researchers; and of Penny Goldberg’s NBER lecture on can trade policy serve as competition policy?
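On the first link above, here is a stylized simulation (my own sketch, not the Illinois study's data or code) of why the naive and experimental answers can diverge so sharply: when healthier workers select into a wellness program, comparing participants with non-participants looks like a large benefit even when the program itself does nothing, while the randomized offer comparison recovers the null. The selection cutoff and sample size are arbitrary illustrative choices.

```python
# A stylized sketch (not the Illinois study's data or code): healthier workers
# select into the program, so the participant vs non-participant gap is large
# even though the true program effect here is zero.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

health = rng.normal(size=n)                                # underlying health
offered = rng.integers(0, 2, size=n) == 1                  # randomized offer
enrolls = offered & (health + rng.normal(size=n) > 0.5)    # healthier workers enroll
outcome = health + 0.0 * enrolls + rng.normal(size=n)      # true program effect is zero

naive = outcome[enrolls].mean() - outcome[~enrolls].mean()
experimental = outcome[offered].mean() - outcome[~offered].mean()
print(f"participant vs non-participant gap: {naive:.2f}")        # large and spurious
print(f"randomized offer comparison:        {experimental:.2f}")  # approximately zero
```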

Pensions and living with your kids

Markus Goldstein
When a government implements a policy, there is often a question about how it will interact with and/or displace existing informal practices. For example, a while back there was a lot of discussion around how government-provided insurance would displace (or not) informal risk-sharing arrangements that may have been doing a good job of protecting some people from risk.
 

Have Descriptive Development Papers Been Crowded Out by Impact Evaluations?

David McKenzie

During our August break, there was an interesting discussion on Twitter after Scott Cunningham tweeted that “Seems like the focus on identification has crowded out descriptive studies, and sometimes forced what would be otherwise a good descriptive study into being a bad causal study. It's actually probably harder to write a good descriptive study these days. Stronger persuasion req.”

Others quickly pointed to the work by Piketty and Saez, and by Raj Chetty and co-authors, which uses large administrative datasets in developed countries to document new facts. A few months earlier, Cyrus Samii set up a thread on descriptive quantitative papers in political science.

But the question got me thinking about recent examples of descriptive papers in development – and the question of what it takes for such papers to get published in general interest journals. Here are some examples published over the last ten years, including some very recently:

"If I can’t do an impact evaluation, what should I do?” – A Review of Gugerty and Karlan’s The Goldilocks Challenge: Right-Fit Evidence for the Social Sector

David Evans

Are we doing any good? That’s what donors and organizations increasingly ask, from small nonprofits providing skills training to large organizations funding a wide array of programs. Over the past decade, I’ve worked with many governments and some non-government organizations to help them figure out whether their programs are achieving their desired goals. During those discussions, we spend a lot of time drawing the distinction between impact evaluation and monitoring systems. But because my training is in impact evaluation – not monitoring – my focus tends to be on what impact evaluation can do and on what monitoring systems can’t. That sells monitoring systems short.

Mary Kay Gugerty and Dean Karlan have crafted a valuable book – The Goldilocks Challenge: Right-Fit Evidence for the Social Sector – that rigorously lays out the power of monitoring systems to help organizations achieve their goals. This is crucial. Not every program will or even should have an impact evaluation. But virtually every program has a monitoring system – of one form or another – and good monitoring systems help organizations to do better. As Gugerty and Karlan put it, “the trend to measure impact has brought with it a proliferation of poor methods of doing so, resulting in organizations wasting huge amounts of money on bad ‘impact evaluations.’ Meanwhile, many organizations are neglecting the basics. They do not know if staff are showing up, if their services are being delivered, if beneficiaries are using services, or what they think about those services. In some cases, they do not even know whether their programs have realistic goals and make logical sense.”
 
