
Weekly links May 4: has your study been abducted? Noisy risk preferences, why we should be cautious extrapolating climate change and growth estimates, and more...

By David McKenzie
  • An excellent Trade Talks podcast with Dave Donaldson provides a detailed discussion of his work on the impact of railroads on development in India and in U.S. economic history.
  • The latest Journal of Economic Perspectives includes:
    • Acemoglu provides a summary of Donaldson’s work that led to him receiving the John Bates Clark Medal
    • Several papers on risk preferences, including discussion of whether risk preferences are stable and how to think about them if they are not. An interesting sidenote is a comment on how much measurement error there is when using incentivized lotteries: the correlations between risk premia measured for the same individual using different experimental choices can be quite low, and correlations tend to be higher for survey measures. There is also speculation that the measurement error may be worse in developing countries: “large share of the papers that document contradictory effects of violent conflict or natural disasters use experimental data from developing countries, but these tools were typically developed in the context of high-income countries. They may be more likely to produce noisy results in samples that are less educated, partly illiterate, or less used to abstract thinking”
    • A series of papers on how much the U.S. gains from trade

Maybe Money does Grow on Trees

By Arianna Legovini
Environmental degradation puts livelihoods at risk, and the Ghanaian government is determined to fight it. Planting trees is one approach to addressing soil erosion, declining topsoil quality, and the overgrowth of weeds and grass that leads to wildfire. This is why the World Bank’s Sustainable Land and Water Management Project (SLWMP) offers farmers free seedlings to plant trees, at a cost of about $100 per farmer. The question researchers asked at the time of project design was: would free seedlings be enough?

Rethinking identification under the Bartik Shift-Share Instrument

By David McKenzie
While it has been said that “friends don’t let friends use IV”, one exception has been the Bartik or shift-share instrument. Development economists tend to see these instruments used most in the trade and migration literatures, with Jaeger et al. (2018) noting that “it is difficult to overstate the importance of this instrument for research on immigration.”
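For readers unfamiliar with how the instrument is built: a shift-share (Bartik) instrument predicts local labor-demand growth by interacting each location's initial industry employment shares (the “shares”) with national industry growth rates (the “shifts”). A minimal sketch with made-up numbers, omitting the leave-one-out adjustments often used in practice:

```python
import numpy as np

# Toy example: 3 locations, 2 industries (all numbers invented).
# shares[l, k] = initial employment share of industry k in location l.
shares = np.array([
    [0.7, 0.3],
    [0.5, 0.5],
    [0.2, 0.8],
])

# shifts[k] = national growth rate of industry k.
shifts = np.array([0.02, -0.01])

# The Bartik instrument for each location is the share-weighted sum
# of national shifts: B_l = sum_k shares[l, k] * shifts[k].
bartik = shares @ shifts
print(bartik)  # one predicted growth rate per location
```

Identification then rests on either the shares or the shifts being exogenous to local outcomes, which is exactly the assumption the recent literature re-examines.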

Weekly Links April 27: improving water conservation, acceptance rates drop below 3%, using pre-analysis plans for observational data, and more...

By David McKenzie

Pitfalls of Patient Satisfaction Surveys and How to Avoid Them

By David Evans

A child has a fever. Her father rushes to his community’s clinic, his daughter in his arms. He waits. A nurse asks him questions and examines his child. She gives him advice and perhaps a prescription to get filled at a pharmacy. He leaves.

How do we measure the quality of care that this father and his daughter received? There are many ingredients: Was the clinic open? Was a nurse present? Was the patient attended to swiftly? Did the nurse know what she was talking about? Did she have access to needed equipment and supplies?

Both health systems and researchers have made efforts to measure the quality of each of these ingredients, with a range of tools. Interviewers pose hypothetical situations to doctors and nurses to test their knowledge. Inspectors examine the cleanliness and organization of the facility, or they make surprise visits to measure health worker attendance. Actors posing as patients test both the knowledge and the effort of health workers.

But – you might say – that all seems quite costly (it is) and complicated (it is). Why not just ask the patients about their experience? Enter the “patient satisfaction survey,” which goes back at least to the 1980s in a clearly recognizable form. (I’m sure someone has been asking about patient satisfaction in some form for as long as there have been medical providers.) Patient satisfaction surveys have pros and cons. On the pro side, health care is a service, and a better delivered service should result in higher patient satisfaction. If this is true, then patient satisfaction could be a useful summary measure, capturing an array of elements of the service – were you treated with respect? did you have to wait too long? On the con side, patients may not be able to gauge key elements of the service (is the health professional giving good advice?), or they may value services that are not medically recommended (just give me a shot, nurse!).

Two recently published studies in Nigeria provide evidence that both gives pause to our use of patient satisfaction surveys and points to better ways forward. Here is what we’ve learned:

Evidence-based or interpretation-based?

By Berk Ozler

When people say “evidence-based policymaking” or talk about the “credibility revolution,” they are surely referring to the facts that (a) we have (or are trying hard to obtain) better evidence on the impacts of various approaches to solving problems, and (b) we should use that evidence to make better decisions regarding policy and program design. However, the debate about the Haushofer and Shapiro (2018) paper on the three-year effects of GiveDirectly cash transfers in Kenya taught me that how people interpret the evidence is as important as the underlying evidence itself. The GiveDirectly blog post (which I discussed here; GiveDirectly posted an update here) and Justin Sandefur’s recent post on the CGD blog are two good examples.

GiveDirectly Three-Year Impacts, Explained by the Authors

This is a guest post by Johannes Haushofer and Jeremy Shapiro

[Update: 11:00 AM on 4/23/2018. Upon the request of the guest bloggers, this post has been updated to include GiveDirectly's updated blog post, published on their website on 4/20/2018, within the text of their post rather than within Özler’s response that follows.]

We’re glad our paper evaluating the long-term impacts of cash transfers has been discussed by GiveDirectly (the source of the transfers) itself and Berk Özler at the World Bank, among others (GiveDirectly has since updated their take on our paper). Given the different perspectives put forth, we wanted to share a few clarifications and our view of the big picture implications.

Weekly links April 20: Swifter justice, swifter coding, better ethics, cash transfers, and more

By David Evans
 
  • From the DIME Analytics Weekly newsletter (which I recommend subscribing to): applyCodebook – One of the biggest time-wasters for research assistants is typing "rename", "recode", "label var", and so on to get a dataset in shape. Even worse is reading through it all later and figuring out what's been done. Freshly released on the World Bank Stata GitHub thanks to the DIME Analytics team is applyCodebook, a utility that reads an .xlsx "codebook" file and applies all the renames, recodes, variable labels, and value labels you need in one go. It takes one line in Stata to use, and all the edits are reviewable variable-by-variable in Excel. If you haven't visited the GitHub repo before, don't forget to browse all the utilities on offer and feel free to fork and submit your own on the dev branch. Happy coding! 

  • Is it possible to speed up a justice system? On the Let's Talk Development blog, Kondylis and Corthay document a reform in Senegal that gave judges tools to speed up decisions, to positive effect. The evaluation then led to further legal reform.  

  • "Reviewing thousands of evaluation studies over the years has also given us a profound appreciation of how challenging it is to find interventions...that produce a real improvement in people’s lives." Over at Straight Talk on Evidence, the team highlights the challenge of finding impacts at scale, nodding to Rossi's iron law of evaluation ("The expected value of any net impact assessment of any large scale social program is zero") and the "stainless steel law of evaluation" ("the more technically rigorous the net impact assessment, the more likely are its results to be zero – or no effect"). They give evidence across fields – business, medicine, education, and training. They offer a proposed solution in another post, and Chris Blattman offers a critique in a Twitter thread.  

  • Kate Cronin-Furman and Milli Lake discuss ethical issues in doing fieldwork in fragile and violent contexts.

  • "What’s the latest research on the quality of governance?" Dan Rogger gives a quick round-up of research presented at a recent conference at Stanford University.  

  • In public procurement, lower transaction costs aren't always better. Over at VoxDev, Ferenc Szucs writes about what procurement records in Hungary teach us about open auctions versus discretion. In short, discretion means lower transaction costs, but also more corruption, higher prices, and inefficient allocation. 

  • Justin Sandefur seeks to give a non-technical explanation of the recent discussion of longer term benefits of cash transfers in Kenya (1. Cash transfers cure poverty. 2. Side effects vary. 3. Symptoms may return when treatment stops.) This is at least partially in response to Berk Özler's dual posts, here and here. Özler adds some additional discussion in this Twitter thread.  
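Returning to the first link above: the codebook-driven workflow that applyCodebook automates in Stata translates naturally to other tools. As an illustration of the idea only (not the utility's actual syntax, and with made-up variable names), a pandas sketch that applies a table of renames and labels in one pass:

```python
import pandas as pd

# Toy raw dataset (invented survey variables).
data = pd.DataFrame({"q1": [1, 2, 2], "q2": [0, 1, 0]})

# Each codebook row maps an old variable name to a new name and a
# label, mirroring the columns an .xlsx codebook might hold.
codebook = pd.DataFrame({
    "old_name": ["q1", "q2"],
    "new_name": ["age_group", "employed"],
    "label": ["Respondent age group", "Currently employed"],
})

# Apply all renames in one pass; keep labels in a dict, since pandas
# has no native variable labels the way Stata does.
renames = dict(zip(codebook["old_name"], codebook["new_name"]))
data = data.rename(columns=renames)
labels = dict(zip(codebook["new_name"], codebook["label"]))

print(list(data.columns))  # ['age_group', 'employed']
```

The payoff is the same as in the Stata version: every edit lives in one reviewable table rather than scattered across dozens of rename and label commands.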

What are we learning about the impacts of public works programs on employment and violence? Early findings from ongoing evaluations in fragile states

By Eric Mvukiyehe

Labor-intensive public works (LIPW) programs are a popular policy intended to provide temporary employment opportunities to vulnerable populations through work-intensive projects, such as the development and maintenance of local infrastructure, that do not require special skills. For a review of LIPW programs (design, evidence and implementation), see Subbarao et al. here. In fragile states, LIPW programs are also presumed to contribute to social and political stability. The developed infrastructure allows for the implementation of other development and peacekeeping activities, while employment opportunities may help prevent at-risk youth from being recruited by armed groups. Despite their popularity and presumed impact on beneficiaries, the evidence base for LIPW programs has been surprisingly weak.
 
The Development Impact Evaluation (DIME) unit, in collaboration with the Fragility, Conflict and Violence Cross Cutting Solutions Area (FCV-CSSA) and the Social Protection and Labor Global Practice (SPL-GP), is carrying out a multi-country set of 7 Randomized Controlled Trials (RCTs) of LIPW programs targeting around 40,000 households across 5 countries: Comoros, the Democratic Republic of Congo, Côte d’Ivoire, Egypt, and Tunisia. This initiative is part of a broader research program on Fragility, Conflict and Violence (FCV) — a portfolio of 35 impact evaluations in over 25 countries that focuses on 5 key priority areas: (i) jobs for the poor and at-risk youth; (ii) public sector governance/civil service reforms; (iii) political economy of post-conflict reconstruction; (iv) gender-based violence; and (v) urban crime and violence.
