
Power to the people? Taking a look at community-driven reconstruction in the DRC


Given the Bank's recent release of a report on community-driven development and a recent New York Times article about the intractability of peace in the Democratic Republic of the Congo, I thought it would be worth looking at a recent paper by Macartan Humphreys, Raul Sanchez de la Sierra, and Peter van der Windt on a community-driven reconstruction program in the DRC.

This paper is interesting not only for its results, but also for the measurement tools it uses (it clocks in at a hefty 78 pages, but the authors use this space well to discuss a lot of measurement and methodological issues). I'll turn to those in a minute, but first let's take a look at the intervention.

The intervention they are evaluating is called Tuungane, a UK-funded, community-driven reconstruction program implemented by the IRC and CARE. The stage of the program that Humphreys and co. look at covers over 1,200 villages and starts by setting up governance structures. These are elected village development committees, with complementary training in leadership, governance, and social inclusion. These committees then work with the population to select and execute a project. Most of these projects, it turns out, are school and health clinic construction/rehabilitation. The idea here is that this process -- election and training, plus a project -- can help change how government works in these villages and thus improve welfare. They also throw in an intriguing variant: for some of the villages, they lift a gender parity requirement for the village development committee.

To tackle the question of whether this community-driven reconstruction leads to improved welfare, Humphreys and co. work with the implementers to set up a pretty large randomized trial covering 600 community development committee areas (one organizational unit up from the village development committee areas). The randomization was done through public lotteries (and the paper has a nice discussion of the politics and methodological implications of public vs. private lotteries for randomization). The gender parity requirement is also randomly removed across a subset of villages.

It's worth noting a couple of neat things about their evaluation design. First, they spelled out what they were going to look at at the beginning of the work, then later registered the trial and provided a mock report that the final results would conform to. Second, it's hard to measure governance outcomes. One could ask people if they thought there was more corruption or if their preferences were reflected in what government does. But then you run the risk of people trying to be nice (or not) to the survey team (indeed, it looks like Humphreys and co. set up one survey to look directly at this question of social desirability bias, and I look forward to seeing the results of this). To avoid this problem across their outcomes, Humphreys and co. ran another intervention called RAPID, which was as divorced as possible from the intervention being evaluated and which would allow them to measure actual behavioral responses rather than beliefs or opinions. In this measurement intervention, they give a $1,000 unconditional grant to a subset of the treatment and control villages. This lets them look at how villages deal with the management of the money in terms of participation, accountability, efficiency, and the like. They also pull a neat trick by telling folks in the village that the grant would be at least $900. This lets them compare what the village leaders tell the community about the grant with what actually showed up.

So what do they find? Overall, this is a case of a program without significant impacts. Let me be clear that this doesn't seem to be a power issue (they have a pretty huge sample), nor a measurement problem (they use a host of really creative data collection tools, going way beyond the bog-standard household survey), nor a failure of implementation. They do have a problem with attrition in their main survey, losing 28% of villages and 38% of individuals. A chunk of this is due to the conditions of working in the not-so-ex-conflict areas of the DRC: in one area the survey teams were expelled due to rising political tensions, and in others security concerns meant that teams couldn't go at all. In the end, though, attrition is not driving the lack of significant results (more on possible explanations below).

The first class of impacts they look at is participation: meeting turnout (using their RAPID intervention), discussion dynamics, how projects are selected, who decides, and the like. Nothing much here. Similarly for accountability and efficiency. For transparency, they look at how much villagers know about the $1,000 RAPID unconditional grant. It turns out 38% of the villagers know the amount to be $1,000 -- but this is not statistically significantly different across treatment and control. Humphreys and co. then send out a posse of auditors to see what actually happened to this money. The auditors were able to track down an average of $850 per village -- but this is no better in the program areas. Again, there are no significant program effects in terms of trust, cohesion, and within-village cooperation. Not surprisingly, they also don't find any significant welfare impacts.

The results for lifting the gender parity quota are in a similar vein (and therefore interesting for different reasons). First, in villages without the gender parity quota, women still ended up comprising about 30% of the village committees. Villages with the quota were somewhat more likely to choose water and sanitation projects than those without. Overall, though (and in contrast to evidence from India I talked about in an earlier post), the quotas don't change attitudes towards women.

So this program didn't do much (in a well-measured, well-identified sense). What happened? One explanation could be that there are heterogeneous underlying effects that these average impacts don't pick up. Stay tuned for this analysis (since this paper adheres strictly to the pre-analysis plan). Second, Humphreys and co. could be focusing on the wrong outcomes. Probably not -- they seem to be covering what the program aimed at. Maybe then the follow-up was too soon. I don't know how long it takes these kinds of indicators to change -- but four years or so (which is their timeframe) seems like it ought to be enough (although Humphreys and co. make an interesting argument that maybe social change doesn't happen until economic benefits really come in -- and the big spending in this program was yet to come). So then maybe the program was too small (in terms of per capita spending), or was pitched at the wrong level, or maybe this wasn't the right way to tackle this problem in the DRC. Further research is definitely needed, but these initial results are pretty informative and give plenty of food for thought.


Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office
