
Weekly links October 28: the platinum development intervention, super long-run cash effects, in praise of uncivil discussion, and more…

David McKenzie
  • The platinum development intervention: Lant Pritchett on how the gold-standard ultra-poor programs don’t stack up very well against migration.
  • Cash effects after 40 years: The long-term impacts of cash transfers in the U.S. – Wonkblog covers a new working paper (the job market paper of Stanford student David Price) on the income maintenance experiments that took place four decades ago. Those who received the assistance retired earlier, and as a result earned less over their careers, while there appear to be no long-term impacts on children (for what they can measure using admin data).

It’s that Time of the Year: Submissions now open for our annual “Blog your job market paper” series

David McKenzie

We are pleased to launch, for the sixth year, a call for PhD students on the job market to blog their job market papers on the Development Impact blog. We welcome blog posts on anything related to empirical development work, impact evaluation, or measurement. For examples, you can see posts from 2015, 2014, 2013, and 2012. We will follow a similar process to previous years, which is as follows:

We are accepting submissions from now until midnight EST on Tuesday, November 22, with the goal of publishing a couple before Thanksgiving and then about 6-8 more in December, when people are deciding whom to interview. We will not accept any submissions after the deadline (no exceptions). As with last year, we will do some refereeing to decide which posts to include, on the basis of interest, how well they are written, and fit with the blog. Your chances of being accepted are likely to be somewhat higher if you submit early rather than waiting until the last minute before the deadline.

More replication in economics?

Markus Goldstein
About a year ago, I blogged about a paper that tried to replicate the results of 61 papers in economics and found that in 51% of cases it couldn’t get the same result. In the meantime, someone brought to my attention a paper that takes a wider sample and also makes us think about what “replication” is, so I thought it would be worth looking at those results.

Lessons from some of my evaluation failures: Part 1 of ?

David McKenzie

We’ve yet to receive much in the way of submissions to our learning-from-failure series, so I thought I’d share some of my trials and tribulations, and what I’ve learnt along the way. Some of this comes back to how much you need to sweat the small stuff versus delegate and preserve your time for bigger-picture thinking (which I discussed in this post on whether IE is O-ring or knowledge-hierarchy production). But this presumes you have a choice over what you do yourself; often, when dealing with governments and multiple layers of bureaucracy, the problem is that your scope for micro-management is limited in the first place. Here are a few failures, and I can share more in later posts.

Weekly links October 21: Deaton on doing research, flexible work schedules, 17,000 minimum wage changes, and more…

David McKenzie
  • Three questions with Angus Deaton – why diversity in researchers is good and directed research can be bad: “Every one of us has a different upbringing. Many people in economics now come from many countries around the world, they have different political views and political backgrounds. There’s a whole different social culture, and so on. I think economics in the United States has changed immeasurably in the last 30 years and been enormously enriched by that diversity with people coming from all over the world. That will only work if people bring with them the stuff they had when they were children or the stuff they did in college, the passions they had early on. Either smash them to pieces in the face of the data and see your professors like me telling them to do something else, or turn them into something really valuable. So, don’t lose your unique value contributions. Stick to what is really important to you and try to research that” (h/t Berk).
  • Chris Blattman on the hidden price of risky research, particularly for women.

What’s new in education research? Impact evaluations and measurement – October round-up

David Evans

Here is a curated round-up of recent research on education in low- and middle-income countries, with a few findings from high-income countries that I found relevant. All are from 2016.

If I’m missing recent articles that you’ve found useful, please add them in the comments!

Did Peru’s CCT program halve its stunting rate?

Berk Ozler

On September 30, the Guardian ran several articles (see here, here, and an editorial here) linking the halving of Peru’s stunting rate (from 28% to 14% between the mid-2000s and 2015) to its CCT program Juntos. Of course, it is great to hear that the share of stunted children in Peru declined dramatically over a short period. However, while cash transfer programs (conditional or not) have been successful in improving various outcomes, including child health, the effect sizes are never this dramatic, so I was curious to see whether the decline was part of a secular trend in Peru or could actually be attributed primarily to Juntos.

Weekly links October 14: fake doctors and dubious health claims, real profits, refugees, reweighting, and more…

David McKenzie
  • In Science this week: which refugees do Europeans want? A “conjoint experiment” with 180,000 Europeans finds they prefer refugees who are high-skilled, young, fluent in the local language, persecuted, and non-Muslim (5-page paper, 121-page appendix!). The experiment showed respondents pairs of refugee profiles with randomly assigned characteristics and asked whether they supported admitting each one, and, if they could admit only one of the pair, which one.
  • BBC News covers the recent Science paper by Jishnu Das and co-authors on training ‘fake doctors’ in India (or, for more study details, see the MIT press release, which has a great photo-bomb).

When the Juice Isn’t Worth the Squeeze: NGOs refuse to participate in a beneficiary feedback experiment

Guest post by Dean Karlan and Jacob Appel
Dean has failed again! Dean and Jacob are kicking off our series on learning from failure by contributing a case that wasn’t in the book.
I. Background + Motivation
Recent changes in the aid landscape have allowed donors to support small, nimble organizations that can identify and address local needs. However, many have lamented the difficulty of monitoring the effectiveness of local organizations. At the same time, as donors become more involved, the World Bank has called for greater “beneficiary control,” or more direct input from the people receiving development services.
While attempts have been made to increase the accountability of non-profits, little research addresses whether doing so actually encourages donors to give more or to continue supporting the same projects. On the contrary, it may be that a lack of accountability provides donors with a convenient excuse for not giving, or donors may give the same amount even with greater accountability. Furthermore, little research indicates whether increased transparency and accountability would give organizations incentives to provide services more effectively. Rigorous research will help determine the impact of increasing accountability, both on the behavior of donors and on the behavior of organizations working in the field.

Book Review: Failing in the Field – Karlan and Appel on what we can learn from things going wrong

David McKenzie

Dean Karlan and Jacob Appel have a new book out called Failing in the Field: What we can learn when field research goes wrong. It is intended to highlight research failures and what we can learn from them, sharing stories that might otherwise be told only over a drink at the end of a conference, if at all. It draws on a number of Dean’s own studies, as well as those of several other researchers who have shared their stories and lessons. The book is a good short read (I finished it in an hour), and definitely worth the time for anyone involved in collecting field data or running an experiment.