

Informing policy with research that is more than the sum of the parts

Markus Goldstein
Coauthored with Doug Parkerson

A couple of years ago, an influential paper in Science by Banerjee and coauthors looked at the impact of poverty graduation programs across 6 countries. At the time (and probably since) this was the largest effort to look at the same(ish) intervention in multiple contexts at once – arguably solving the replication problem and proving external validity in one fell swoop.

Incorporating participant welfare and ethics into RCTs

Berk Ozler

One of the standard defenses of an RCT proposal to a skeptic is to invoke budget and implementation capacity constraints and argue that, since not everyone will get the desired treatment (at least initially), the fairest way is to allocate treatment randomly among the target population. While this is true, it is also possible to take participants’ welfare into account, incorporating their preferences and expected responses to treatment into the design of an RCT that still satisfies the aims of the researcher (identifying unbiased treatment effects with sufficient precision). A recent paper by Yusuke Narita makes enough headway in this direction that development economists should take notice.
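
To make the basic idea concrete, here is a minimal sketch (in Python, with hypothetical numbers; an illustration of the general idea rather than Narita’s actual mechanism) of how assignment probabilities can be tilted toward participants with higher predicted benefit while staying strictly between 0 and 1, so that treatment effects remain identified by design:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

n = 1000          # hypothetical size of the target population
capacity = 300    # hypothetical number of treatment slots the budget allows

# Stand-in for predicted benefit (e.g., from baseline data or stated preferences).
predicted_benefit = rng.normal(size=n)

# Tilt assignment probabilities toward higher predicted benefit,
# scaled so the expected number treated roughly matches capacity...
weights = np.exp(predicted_benefit)
p = capacity * weights / weights.sum()

# ...but bounded away from 0 and 1 so every participant keeps a positive
# probability of each arm; because each p is known by design, unbiased
# treatment effects can still be estimated (e.g., with inverse-probability weights).
p = np.clip(p, 0.05, 0.95)

treated = rng.random(n) < p
print(f"{treated.sum()} treated out of {n} (target capacity: {capacity})")
```

The point of the toy example is simply that welfare considerations can shape who is more likely to receive treatment without abandoning randomization, because the researcher still controls and records each participant’s assignment probability.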

Weekly links May 11: more on shift-share instruments, updated balance tables, measuring height with photos, and more...

David McKenzie

Growing or fading? The long-run impacts of educational interventions

David Evans
Also available in: Français

Many education investments focus on the first years of primary education or – even before that – early child education. The logic behind this is intuitive: Without a solid foundation, it’s hard for children and youth to gain the later skills that build on it. If you can’t decipher letters, then it’s going to be tough to learn from a science textbook. Or even a math textbook. But it’s important to remember that for most “investors” (whether governments or parents or the children themselves), the most basic skills aren’t the ultimate goal. The objective is better life outcomes. Much of the justification for these early interventions is that they will translate into better lives once these children grow up.

Having an impact as a development economist outside of a research university: interview with Evan Borkum of Mathematica

David McKenzie
Today’s installment in this occasional series on how to use your development economics PhD outside of a research university is with Evan Borkum, a senior researcher in the International Research Division of Mathematica Policy Research Inc.


DI: Please provide a short paragraph describing what you do in this job, and give us a sense of what a typical day or week might look like for you.

My job is to conduct independent rigorous impact and performance evaluations of social programs in developing countries. Most of this work is conducted under contract to US government agencies (mostly MCC and USAID) and various foundations, who issue requests for proposals to evaluate their programs. In my eight years at Mathematica I’ve worked on evaluations in Asia, Africa, and Eastern Europe, and in topic areas including agriculture, primary education, vocational training, maternal and child health, land, and others. As senior researcher on an evaluation team I’m typically responsible for technical leadership of all aspects of an evaluation, including study design, data collection, and final analysis and reporting. Last week was fairly typical and included work on designing a randomized controlled trial of an anti-child labor program, drafting a quantitative survey of vocational education students, and planning the analysis of survey data from farmers in Morocco.

Weekly links May 4: has your study been abducted? Noisy risk preferences, why we should be cautious extrapolating climate change and growth estimates, and more...

David McKenzie
  • Excellent tradetalks podcast with Dave Donaldson, with a detailed discussion of his work on the impact of railroads on development in India and in U.S. economic history.
  • The latest Journal of Economic Perspectives includes:
    • Acemoglu provides a summary of Donaldson’s work that led to his receiving the John Bates Clark Medal
    • Several papers on risk preferences, including discussion of whether risk preferences are stable and how to think about them if they are not. An interesting sidenote is a comment on how much measurement error there is when using incentivized lotteries – the correlations between risk premia measured for the same individual using different experimental choices can be quite low, and correlations tend to be higher for survey measures – and speculation that the measurement error may be worse in developing countries: a “large share of the papers that document contradictory effects of violent conflict or natural disasters use experimental data from developing countries, but these tools were typically developed in the context of high-income countries. They may be more likely to produce noisy results in samples that are less educated, partly illiterate, or less used to abstract thinking.” (A small simulation after this list illustrates how such measurement noise attenuates correlations.)
    • A series of papers on how much the U.S. gains from trade
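
On the measurement-error point above, a small simulation (Python, with hypothetical numbers) shows why two elicitations of the same, perfectly stable underlying risk preference can still correlate weakly: independent noise in each measure attenuates the observed correlation toward zero, and the attenuation is worse the noisier the elicitation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

n = 2000                         # hypothetical sample size
true_pref = rng.normal(size=n)   # stable underlying risk preference (variance 1)

# Two elicitations of the same trait with independent noise,
# e.g., two different incentivized lottery tasks.
noise_sd = 1.5                   # hypothetical: noise larger than the signal
measure_1 = true_pref + rng.normal(scale=noise_sd, size=n)
measure_2 = true_pref + rng.normal(scale=noise_sd, size=n)

# Expected correlation is var(true) / (var(true) + var(noise))
# = 1 / (1 + 1.5**2), roughly 0.31, even though the trait itself never changes.
print(np.corrcoef(measure_1, measure_2)[0, 1])
```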

Maybe Money does Grow on Trees

Arianna Legovini
Environmental degradation puts livelihoods at risk, and the Ghanaian government is determined to fight it. Planting trees is one approach to addressing soil erosion, poor topsoil quality, and the overgrowth of weeds and grass that leads to wildfires. This is why the World Bank’s Sustainable Land and Water Management Project (SLWMP) offers farmers free seedlings to plant, at a cost of about $100 per farmer. The question researchers asked at the time of project design was: would free seedlings be enough?

Rethinking identification under the Bartik Shift-Share Instrument

David McKenzie
While it has been said that “friends don’t let friends use IV”, one exception has been the Bartik or shift-share instrument. Development economists tend to see these instruments used most in the trade and migration literatures, with Jaeger et al. (2018) noting that “it is difficult to overstate the importance of this instrument for research on immigration.”
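
For readers new to the terminology, the canonical shift-share (Bartik) instrument for a location $l$ at time $t$ interacts pre-period local shares with aggregate shifts (the notation here is generic):

$$B_{lt} = \sum_{k} z_{lk}\, g_{kt},$$

where $z_{lk}$ is industry $k$’s initial share of employment in location $l$ (the “shares”) and $g_{kt}$ is the national growth rate of industry $k$ (the “shifts”). In the migration literature, $z_{lk}$ is instead the historical share of migrants from origin country $k$ who settled in location $l$, and $g_{kt}$ the national inflow from that origin.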

Weekly Links April 27: improving water conservation, acceptance rates drop below 3%, using pre-analysis plans for observational data, and more...

David McKenzie

Pitfalls of Patient Satisfaction Surveys and How to Avoid Them

David Evans

A child has a fever. Her father rushes to his community’s clinic, his daughter in his arms. He waits. A nurse asks him questions and examines his child. She gives him advice and perhaps a prescription to get filled at a pharmacy. He leaves.

How do we measure the quality of care that this father and his daughter received? There are many ingredients: Was the clinic open? Was a nurse present? Was the patient attended to swiftly? Did the nurse know what she was talking about? Did she have access to needed equipment and supplies?

Both health systems and researchers have made efforts to measure the quality of each of these ingredients, with a range of tools. Interviewers pose hypothetical situations to doctors and nurses to test their knowledge. Inspectors examine the cleanliness and organization of the facility, or they make surprise visits to measure health worker attendance. Actors posing as patients test both the knowledge and the effort of health workers.

But – you might say – that all seems quite costly (it is) and complicated (it is). Why not just ask the patients about their experience? Enter the “patient satisfaction survey,” which goes back at least to the 1980s in a clearly recognizable form. (I’m sure someone has been asking about patient satisfaction in some form for as long as there have been medical providers.) Patient satisfaction surveys have pros and cons. On the pro side, health care is a service, and a better delivered service should result in higher patient satisfaction. If this is true, then patient satisfaction could be a useful summary measure, capturing an array of elements of the service – were you treated with respect? did you have to wait too long? On the con side, patients may not be able to gauge key elements of the service (is the health professional giving good advice?), or they may value services that are not medically recommended (just give me a shot, nurse!).

Two recently published studies in Nigeria provide evidence that both gives pause to our use of patient satisfaction surveys and points to better ways forward. Here is what we’ve learned:
