
Time to Put Institutions at the Center of Community Driven Development (CDD)?

By Janmejay Singh

Community driven development (CDD) has been a key operational strategy supported by the World Bank for more than a decade – averaging about $2 billion in lending every year and now covering more than 80 countries. CDD’s rapid spread stems from its promise of achieving inclusive and sustainable poverty reduction by emphasizing empowerment and putting resources under the direct control of community groups. Yet despite its popularity, evidence on whether these programs work remains limited and scattered. Recently, the Bank has made two significant efforts to pull together the different strands of evidence on CDD and provide a summary picture of what we know and what we don’t (please see What Have Been the Impacts of World Bank Community-Driven Development Programs? and Localizing Development – Does Participation Work?). On the positive end, the reviews find that CDD-type programs, when implemented properly, improve service delivery outcomes in sectors like health and education, improve resource sustainability, and help construct lower-cost, better-quality infrastructure.

However, on the negative end, perhaps the most alarming conclusion that both reports share is the lack of much, if any, positive impact of CDD and participatory programs on social capital, cohesion, and empowerment. This is clearly ironic, given that a major premise behind using a CDD approach is its ability to empower and foster greater trust, agency, and collective action.

So why are we not seeing any social capital impacts? Part of the problem is measurement and attribution. As both Wong and Mansuri & Rao note, measuring social outcomes like ‘empowerment’ and ‘social capital’ is challenging. Simple perception questions on things like ‘trust’ may not be reliable. And isolating the impact of programs on social outcomes is tough even in the best-designed evaluations.

Timeframes for evaluations may also be an issue. Most evaluations only run for a few years, whereas impacts on entrenched social variables like trust may require much longer timeframes to measure. Moreover, institutional change trajectories may not follow a linear path – things could worsen before they get better (e.g. trust may initially dip when communities gain access to information on budgets and expenditures they were hitherto denied) – and the ‘time lags’ between interventions and outcomes can be quite long.

Design matters too – a lot! Simple design features such as repeated cycles of grants, the amount spent per capita, the extent of participatory planning, the overall context (for instance, the degree of decentralization), and quotas for marginalized groups in decision-making bodies seem to play a big role in determining whether social outcomes are achieved.

But for me, the issue really boils down to the type of CDD model being used. In the Bank, we often use the term ‘CDD’ as if it were a simple, cookie-cutter project type applied uniformly across all regions. A quick glance at the portfolio of projects in the CDD database would, however, tell you that this is not the case. As Wong notes, many early CDD programs were ‘ring-fenced’ and purposely set up as ‘temporary’ programs to deliver ‘on-time’ sub-projects, often in the wake of crises, conflicts, or disasters. Social capital was thus never really a focus. This first-generation CDD model – which involves at best 3-4 local planning meetings in a year, one sub-project cycle, and project implementation outsourced to or managed by contractors – can hardly be expected to make social capital budge. Put differently, if your focus is physical infrastructure and not social infrastructure, then why expect returns on the latter? It’s not surprising, therefore, that evaluations of these types of CDD programs don’t show much impact on the ‘social capital’ front.

Contrast this with the rural livelihoods programs in South Asia, and specifically India, such as the recent $1 billion India National Rural Livelihoods Project. These CDD programs have as their main focus the “building of institutional platforms for the poor”. They do this by investing in the creation of women’s self-help groups (SHGs) of 10-12 women and then federating these at successively higher levels – village, block, and district. Next, these institutions of the poor are linked to markets (for economic inclusion), to commercial banks (for financial inclusion), and to local governments and sector agencies (for political participation and last-mile service delivery). The philosophy here is quite explicitly to convert the ‘social capital’ of the poor into economic, financial, and political capital. These SHGs meet once a week, inter-lend savings among themselves, and govern themselves by democratic rules that are routinely monitored. In Tamil Nadu, India, they even pledge allegiance to these principles before every group meeting (see photo)! Clearly, the level of social engagement and investment in institution building here is of a different order of magnitude. And the anecdotal evidence is that wherever CDD programs have put effort into building community institutions – as they have in South Asia and parts of East Asia and Latin America – they have had a real impact on social outcomes like ‘social capital’ and ‘empowerment’.

So the lesson in all this is that to get both development and ‘social capital’ impacts, CDD programs need to focus much more earnestly on strengthening the community organizations and groups that they support. And the ‘science of delivery’ (to borrow WB President Kim’s terminology) for such grassroots institution building is neither simple nor well understood. But at a minimum, it means one needs to start by putting institutions at the center of community driven development!
 

Photo credits: World Bank 

