
Cash transfers and health: It matters when you measure, and it matters how many health care workers are around to provide services

By David Evans

This post was co-authored with Katrina Kosec of IFPRI.

A whirlwind, surely incomplete tour of cash transfer impacts on health
Your run-of-the-mill conditional cash transfer (CCT) program has significant impacts on health-seeking behavior. Specifically, there are conditions (or co-responsibilities, if you prefer) that children attend school and/or that they get vaccinated or attend some wellness visits. While the school enrollment effects are well established, the effects on both health-seeking behavior and health outcomes have been much more mixed. CCTs have led to better child nutritional status and improved child cognitive development in Nicaragua and to better nutritional outcomes for a subset of children in Colombia, but had no impacts on child health in studies of Brazil and Honduras. CCTs conditioned only on school enrollment did not lower HIV infections among adolescent girls in South Africa, and in Indonesia CCTs increased health visits but this did not translate into measurably improved health. Unconditional cash transfer programs have also had mixed results on health, with better mental health and food consumption in Kenya, better anthropometric outcomes for girls (but not boys) in South Africa, no average impacts on child outcomes in Ecuador (although some for the poorest quarter), and no average impacts on maternal health care utilization in Zambia (albeit with effects for women with better access to such services).

Lessons from some of my evaluation failures: Part 2 of ?

By David McKenzie

I recently shared five failures from some of my impact evaluations. Since that barely scratches the surface of the many ways I’ve experienced failure in attempting to conduct impact evaluations, I thought I’d share a second batch now.

Case 4: Working with a private bank in Uganda to offer business training to their clients, written up as a note here.

If you want your study included in a systematic review, this is what you should report

By David Evans


This post is co-authored with Birte Snilstveit of 3ie.
 
Impact evaluation evidence continues to accumulate, and policy makers need to understand the range of evidence, not just individual studies. Across all sectors of international development, systematic reviews and meta-analysis (the statistical analysis used in many systematic reviews) are increasingly used to synthesize the evidence on the effects of programmes. These reviews aim to identify all available impact evaluations on a particular topic, critically appraise studies, extract detailed data on interventions, contexts, and results, and then synthesize these data to identify generalizable and context-specific findings about the effects of interventions. (We’ve both worked on this, see here and here.)
 
But as anyone who has ever attempted a systematic review will know, getting key information from included studies can often be like looking for a needle in a haystack. Sometimes this is because the information is simply not provided; other times it is because of unclear reporting. As a result, reviewers spend a long time trying to get the necessary data, often contacting authors to request more details, and the authors themselves often have trouble tracking down some additional statistic from a study they wrote years ago. In some cases, study results simply cannot be included in reviews because of a lack of information.
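To make the reporting point concrete: the minimum a meta-analyst typically needs from each included study is an effect size and its standard error (or enough information, such as means, standard deviations, and sample sizes, to derive them). The sketch below is purely illustrative and is not drawn from any of the reviews discussed here; it shows the standard inverse-variance (fixed-effect) pooling step that many syntheses rest on, with invented study numbers.

    # Minimal sketch (illustrative only): inverse-variance (fixed-effect) pooling,
    # the kind of synthesis step many meta-analyses rely on. The effect sizes and
    # standard errors below are invented, not taken from any study above.
    import math

    # (effect size, standard error) pairs a reviewer would extract per study
    studies = [(0.15, 0.06), (0.08, 0.04), (0.22, 0.10)]

    # More precisely estimated studies get more weight (weight = 1 / SE^2)
    weights = [1.0 / se**2 for _, se in studies]
    pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))

    print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
    print(f"95% CI: [{pooled - 1.96*pooled_se:.3f}, {pooled + 1.96*pooled_se:.3f}]")

Without a clearly reported effect size and standard error, a reviewer cannot even run this step, which is one reason otherwise relevant studies end up excluded.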

CCTs for Pees: Cash Transfers Halloween Edition

By Berk Ozler

Subsidies to increase utilization are used in all sorts of fields, and I have read more than my fair share of CCT papers. However, until last week, I had not come across a scheme that paid people for their urine. Given that I am traveling and missing Halloween, I thought I’d share (I hope it’s not TMI)…
Here is the abstract of an article by Tilley and Günther (2016), published in Sustainability:
"In the developing world, having access to a toilet does not necessarily imply use: infrequent or non-use limits the desired health outcomes of improved sanitation. We examine the sanitation situation in a rural part of South Africa where recipients of novel, waterless “urine-diverting dry toilets” are not regularly using them. In order to determine if small, conditional cash transfers (CCT) could motivate families to use their toilets more, we paid for urine via different incentive-based interventions: two were based on volumetric pricing and the third was a flat-rate payment (irrespective of volume). A flat-rate payment (approx. €1) resulted in the highest rates of regular (weekly) participation at 59%. The low volumetric payment (approx. €0.05/L) led to regular participation rates of only 12% and no increase in toilet use. The high volumetric payment (approx. €0.1/L) resulted in lower rates of regular participation (35%), but increased the average urine production per household per day by 74%. As a first example of conditional cash transfers being used in the sanitation sector, we show that they are an accepted and effective tool for increasing toilet use, while putting small cash payments in the hands of poor, largely unemployed populations in rural South Africa."
 

Weekly links October 28: the platinum development intervention, super long-run cash effects, in praise of uncivil discussion, and more…

By David McKenzie
  • The platinum development intervention: Lant Pritchett on how the gold-standard ultra-poor poverty programs don’t stack up very well against migration.
  • Cash effects after 40 years: The long-term impacts of cash transfers in the U.S. – Wonkblog covers a new working paper (and job market paper from David Price, a Stanford student) on the income maintenance experiments that took place four decades ago. They find that those who received the assistance retired earlier, and as a result made less money over their careers, while there appear to be no long-term impacts on children (for what they can measure using admin data).

It’s that Time of the Year: Submissions now open for our annual “Blog your job market paper” series

By David McKenzie

We are pleased to launch, for the sixth year, a call for PhD students on the job market to blog their job market paper on the Development Impact blog. We welcome blog posts on anything related to empirical development work, impact evaluation, or measurement. For examples, you can see posts from 2015, 2014, 2013, and 2012. We will follow a similar process to previous years, which is as follows:

We will accept submissions starting immediately until midnight EST on Tuesday, November 22, with the goal of publishing a couple before Thanksgiving and then about 6-8 more in December, when people are deciding whom to interview. We will not accept any submissions after the deadline (no exceptions). As with last year, we will do some refereeing to decide which posts to include, based on interest, how well they are written, and fit with the blog. Your chances of being accepted are likely to be somewhat higher if you submit early rather than waiting until the last minute.

More replication in economics?

By Markus Goldstein
About a year ago, I blogged about a paper that had tried to replicate the results of 61 papers in economics and found that in 51% of cases, the authors could not get the same result. In the meantime, someone brought to my attention a paper that takes a wider sample and also makes us think about what “replication” is, so I thought it would be worth looking at those results.
 

Lessons from some of my evaluation failures: Part 1 of ?

By David McKenzie

We’ve yet to receive much in the way of submissions to our learning-from-failure series, so I thought I’d share some of my trials and tribulations, and what I’ve learnt along the way. Some of this comes back to how much you need to sweat the small stuff versus delegate and preserve your time for bigger-picture thinking (which I discussed in this post on whether IE is O-ring or knowledge-hierarchy production). But this presumes you have a choice over what you do yourself; often, when dealing with governments and multiple layers of bureaucracy, the problem is that your scope for micro-management is limited in the first place. Here are a few failures, and I can share more in later posts.

Weekly links October 21: Deaton on doing research, flexible work schedules, 17,000 minimum wage changes, and more…

By David McKenzie
  • Three questions with Angus Deaton – why diversity in researchers is good and directed research can be bad: “Everyone of us has a different upbringing. Many people in economics now come from many countries around the world, they have different political views and political backgrounds. There’s a whole different social culture, and so on. I think economics in the United States has changed immeasurably in the last 30 years and been enormously enriched by that diversity with people coming from all over the world. That will only work if people bring with them the stuff they had when they were children or the stuff they did in college, the passions they had early on. Either smash them to pieces in the face of the data and see your professors like me telling them to do something else or turn them into something really valuable. So, don’t lose your unique value contributions. Stick to what is really important to you and try to research that” (h/t Berk).
  • Chris Blattman on the hidden price of risky research, particularly for women.
