
August Occasional Links 3: poverty mapping redux, hassles vs prices, the poor and banks, and more…

David McKenzie
  • A new paper in Science combines machine learning, nightlights, high-resolution daytime satellite images, and household surveys to map poverty in Africa. Marshall Burke (one of the authors) summarizes in this blog post: “First, we use lower-resolution nightlights images to train a deep learning model to identify features in the higher-resolution daytime imagery that are predictive of economic activity. The idea here … is that nightlights are a good but imperfect measure of economic activity, and they are available everywhere on earth. So the nightlights help the model figure out what features in the daytime imagery are predictive of economic activity. Without being told what to look for, the model is able to identify a number of features in the daytime imagery that look like things we recognize and tend to think are important in economic activity (e.g., roads, urban areas, farmland, and waterways…). Then in the last step of the process, we use these features in the daytime imagery to predict village-level wealth, as measured in a few household surveys that were publicly available and geo-referenced”. Over at the CGD blog, Justin Sandefur offers a nice commentary and critique.
  • Also in Science, Dupas, Hoffmann, Kremer and Zwane compare how effectively prices and hassle/time costs screen health product delivery, so that only those who will use the products take them. They find that requiring people to show up and redeem a monthly voucher reduces the amount of chlorine given away by 60%, but with only a 1% drop in usage.
  • Jason Kerwin on work by Dupas, Robinson, Karlan and Ubfal on introducing savings accounts to the poor in three countries, finding very low take-up. I like his summary: “Unfortunately, like many other silver bullets before it, this one has failed to kill the stalking werewolf of poverty. Indeed, it almost doesn’t leave the barrel of the gun. 60% of the treatment group in Malawi and Uganda (and 94% in Chile) never touch the bank accounts.”
  • USAID has a post on my RFID technology flop, published in Development Engineering.
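The two-step transfer-learning idea in the first bullet above (learn image features that predict abundant nightlights data, then reuse those features to predict scarce survey-measured wealth) can be illustrated with a toy numpy sketch. Everything here is synthetic and hypothetical — the dimensions, the tiny one-hidden-layer network, and the ridge regression are stand-ins for the paper's convolutional network and actual satellite/survey data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: each row of X is a (flattened) "daytime image".
# Nightlights y_nl are observed everywhere; survey wealth only for a few villages.
n, d, h = 500, 20, 8
W_true = rng.normal(size=(d, h))
latent = np.tanh(rng.normal(size=(n, d)) @ W_true)   # unobserved economic features
X = rng.normal(size=(n, d))
latent = np.tanh(X @ W_true)
y_nl = latent.sum(axis=1) + 0.1 * rng.normal(size=n)

# Step 1: train a tiny one-hidden-layer net to predict nightlights from imagery.
W1 = rng.normal(size=(d, h)) * 0.1
w2 = rng.normal(size=h) * 0.1
lr = 0.01
for _ in range(2000):
    hid = np.tanh(X @ W1)                 # learned image features
    err = hid @ w2 - y_nl
    w2 -= lr * (hid.T @ err) / n
    W1 -= lr * (X.T @ ((err[:, None] * w2) * (1 - hid**2))) / n

# Step 2: for the m surveyed villages, regress wealth on the *learned features*
# (hidden activations), not on the nightlight predictions themselves.
m = 60
y_wealth = 2.0 * latent[:m].mean(axis=1) + 0.05 * rng.normal(size=m)
F = np.tanh(X[:m] @ W1)                   # features transferred from step 1
lam = 1.0                                 # ridge penalty
beta = np.linalg.solve(F.T @ F + lam * np.eye(h), F.T @ y_wealth)

# Predict wealth everywhere, including unsurveyed villages.
wealth_hat = np.tanh(X @ W1) @ beta
print(wealth_hat.shape)  # (500,)
```

The point of the two stages is data abundance: nightlights supervise the feature extractor cheaply at scale, and the handful of geo-referenced surveys only has to fit a small linear map on top of those features.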

And finally, XKCD on linear regressions not to trust
 

August Occasional Links 2: Has IE peaked? Unusual seeding of random selection, unequal Egypt, and more…

David McKenzie

August occasional links 1: gender, education accountability, conferences, and more…

David McKenzie

Weekly links July 29: the political economy of running a RCT, the peer review trade-off, work with me, and more…

David McKenzie
  • A couple of months ago I attended this very interesting conference by the Innovation Growth Lab, run by Nesta. I was in a session with Mark Sayers from the UK’s Department for Business, Energy and Industrial Strategy, which has been running an RCT on growth vouchers for 20,000 firms in the UK. He gave a talk on lessons learned, from the policy side, in engaging in such a trial – and I found the political economy side very interesting to hear (the Treasury agreed to release funding for a program it was somewhat skeptical of only on the condition that it be evaluated by an RCT). A video of his short talk is now up.
  • Slate piece on how journalists should cover working papers (based on the recent Fryer paper on racial bias in the use of lethal force). h/t Berk, who is reminded of his classic post on working papers not working.

Making Disaster Relief More Like Funeral Societies: A Review of Dercon and Clarke’s Dull Disasters

David McKenzie

I was recently at the Novafrica conference in Lisbon, where one of the keynote talks was given by Stefan Dercon. He based it around a newly released short book he has written with Daniel Clarke, called Dull Disasters (open access version). The title is meant to indicate both the aim to make dealing with disasters a dull event rather than a media circus, as well as to discuss ways to ‘dull’ or reduce the impact of disasters.
Stefan started his talk by noting that disaster relief may well be the part of the whole international development and humanitarian system that is the least efficient and has had the least research on it. The book starts by noting the predictability of responses: “every time a natural disaster hits any part of the world, the newspaper headlines ten days later can be written in advance: ‘why isn’t the response more coordinated?’”. He gives the responses to the earthquakes in Nepal and Haiti, to Hurricane Katrina, and to Ebola as examples. But he then notes the crux of the problem: “…The truth is everybody argues for coordination but nobody likes to be coordinated”.

Using Case Studies to Explore and Explain Complex Interventions

Michael Woolcock
One of the most cited of Martin Ravallion’s many papers implores researchers to “look beyond averages” if they want to better understand development processes. One fruitful area in which this might happen is the assessment of complex interventions, a defining characteristic of which is that they generate wide variation in outcomes.

Politics and Governance: calling for evaluation of “atypical” interventions: Guest Blog by Stuti Khemani

A “meta” problem facing not only impact evaluation work but all development policy dialogue is perverse behavior in the public sector: a failure to pursue evidence-based, technically sound policies. Politics and governance come between statistically significant research results and real impact in the world. We confront these problems in a policy research report that has been described as having transformational implications for the business of international development assistance. And we derive implications for a research agenda involving atypical impact evaluations that would complement work on how to fix the pipes with work on how to fix the institutions that would fix the pipes.

Weekly Links, July 15 -- U.S. Edition: what does police bias have to do with colliders, and questions on POTUS publishing in JAMA.

Berk Ozler
  • What’s JAMA’s new impact factor now that POTUS has published a paper there? As you probably heard, Mr. Obama published a paper in the Journal of the American Medical Association this week, describing the progress to date of US health care reform and outlining the next steps. I have so many questions: was the review process (if there was one) double-blind? Was he first rejected from NEJM? Was there a revise and resubmit? Was Obama totally nice to that rude referee #2, so that his paper could get published without further hassle? If you’re a handling editor or a referee, we want to hear from you (anonymously or not)...
