
Cash grants and poverty reduction

Berk Ozler

Blattman, Fiala, and Martinez (2018), which examines the nine-year effects of a group-based cash grant program for unemployed youth to start individual enterprises in skilled trades in Northern Uganda, was released today. Those of you well versed in the topic will remember Blattman et al. (2014), which summarized the impacts from the four-year follow-up. That paper found large earnings gains and capital stock increases among the young, unemployed individuals who formed groups, proposed enterprises in skilled trades, and were selected to receive the approximately $400-per-person lump-sum grants (in 2008 USD at market exchange rates) on offer from the Northern Uganda Social Action Fund (NUSAF). I had an early look at the paper because the authors kindly sent it to me for comments, and I figured that a summary that goes into some of the minutiae might be helpful for those of you who will not read it carefully – despite your best intentions.

Weekly links September 7: summer learning, wisdom from Manski, how the same data gives many different answers, and more...

David McKenzie
A catch-up of some of the things that caught my attention over our break.
  • The NYTimes Upshot covers an RCT of the Illinois Wellness program, in which the authors found no effect but show that, had they used non-experimental methods, they would have concluded the program was successful.
  • Published in August, “Many Analysts, One Data Set” highlights how many choices are involved in even a simple statistical analysis – “Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and 9 teams (31%) did not observe a significant relationship. Overall, the 29 different analyses used 21 unique combinations of covariates.” (A toy odds-ratio calculation follows this list.)
  • Video of Esther Duflo’s NBER Summer Institute lecture on machine learning for empirical researchers; and of Penny Goldberg’s NBER lecture on whether trade policy can serve as competition policy.
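
For readers unfamiliar with odds-ratio units, here is a toy calculation in R; the counts below are made up for illustration and are not from the referee study:

    # Hypothetical 2x2 table: rows are player skin tone,
    # columns are whether a red card was given.
    tab <- matrix(c(30, 970,   # dark-skin-toned: 30 red cards in 1,000 player-referee pairs
                    20, 980),  # light-skin-toned: 20 red cards in 1,000 pairs
                  nrow = 2, byrow = TRUE,
                  dimnames = list(c("dark", "light"), c("red_card", "no_red_card")))

    odds <- tab[, "red_card"] / tab[, "no_red_card"]  # odds of a red card for each group
    odds["dark"] / odds["light"]                      # odds ratio: about 1.52 here

An odds ratio of 1 would mean both groups face the same odds of a red card, so the teams’ estimates of 0.89 to 2.93 sit on both sides of “no effect” – which is part of what made the exercise so striking.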

Pensions and living with your kids

Markus Goldstein
When a government implements a policy, there is often a question of how it will interact with, or displace, existing informal practices. For example, a while back there was a lot of discussion about whether government-provided insurance would displace informal risk-sharing arrangements that may have been doing a good job of protecting some people from risk.

Have Descriptive Development Papers Been Crowded Out by Impact Evaluations?

David McKenzie

During our August break, there was an interesting discussion on Twitter after Scott Cunningham tweeted that “Seems like the focus on identification has crowded out descriptive studies, and sometimes forced what would be otherwise a good descriptive study into being a bad causal study. It's actually probably harder to write a good descriptive study these days. Stronger persuasion req.”

Others quickly pointed to the work of Piketty and Saez, and of Raj Chetty and co-authors, who have used large administrative datasets in developed countries to document new facts. A few months earlier, Cyrus Samii had set up a thread on descriptive quantitative papers in political science.

But the question got me thinking about recent examples of descriptive papers in development – and about what it takes for such papers to get published in general-interest journals. Here are some examples published over the last ten years, including several very recent ones:

"If I can’t do an impact evaluation, what should I do?” – A Review of Gugerty and Karlan’s The Goldilocks Challenge: Right-Fit Evidence for the Social Sector

David Evans

Are we doing any good? That’s what donors and organizations increasingly ask, from small nonprofits providing skills training to large organizations funding a wide array of programs. Over the past decade, I’ve worked with many governments and some non-governmental organizations to help them figure out whether their programs are achieving their desired goals. In those discussions, we spend a lot of time drawing the distinction between impact evaluation and monitoring systems. But because my training is in impact evaluation – not monitoring – my focus tends to be on what impact evaluation can do and on what monitoring systems can’t. That sells monitoring systems short.

Mary Kay Gugerty and Dean Karlan have crafted a valuable book – The Goldilocks Challenge: Right-Fit Evidence for the Social Sector – that rigorously lays out the power of monitoring systems to help organizations achieve their goals. This is crucial. Not every program will or even should have an impact evaluation. But virtually every program has a monitoring system – of one form or another – and good monitoring systems help organizations do better. As Gugerty and Karlan put it, “the trend to measure impact has brought with it a proliferation of poor methods of doing so, resulting in organizations wasting huge amounts of money on bad ‘impact evaluations.’ Meanwhile, many organizations are neglecting the basics. They do not know if staff are showing up, if their services are being delivered, if beneficiaries are using services, or what they think about those services. In some cases, they do not even know whether their programs have realistic goals and make logical sense.”

Weekly links July 27: Advances in RD, better measurement, lowering prices for poop removal, and more...

David McKenzie
  • Matias Cattaneo and co-authors have a draft manuscript, “A Practical Guide to Regression Discontinuity Designs: Volume II”. It discusses many practical issues that can arise, such as dealing with discrete values of the running variable, multiple running variables, and geographic RDs. Stata and R code are provided throughout (see the short R sketch after this list).
  • Great Planet Money podcast on the Poop Cartel – work Molly Lipscomb and co-authors are doing to lower prices for emptying toilets in Senegal.
  • A paper on how to improve reproducible workflow provides an overview of different tools for different statistical software packages, as well as advice on taskflow management, naming conventions, etc.
  • J-PAL guide on measuring female empowerment.
  • Reviewing a paper you have reviewed before? This tweet by Tatyana Deryugina offers a good suggestion: use a pdf comparison tool (she suggests Draftable) to compare versions and see what has changed.
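
As a companion to the regression discontinuity item above, here is a minimal sketch of a sharp RD estimate in R using the rdrobust package by Cattaneo and co-authors. The data are simulated with a known jump at the cutoff; nothing here is taken from the guide itself:

    # install.packages("rdrobust")
    library(rdrobust)

    set.seed(1)
    n <- 1000
    x <- runif(n, -1, 1)                                # running variable; cutoff at 0
    y <- 0.5 * x + 0.3 * (x >= 0) + rnorm(n, sd = 0.2)  # outcome with a true jump of 0.3

    summary(rdrobust(y, x, c = 0))  # local-polynomial RD estimate with robust, bias-corrected CIs
    rdplot(y, x, c = 0)             # binned scatter plot with polynomial fits on each side

The estimated jump should come out close to 0.3; volume II of the guide covers the complications – discrete running variables, multiple running variables, geographic boundaries – that this clean simulated example deliberately sidesteps.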
