Published on Let's Talk Development

How can the Knowledge Bank make development more effective?


Like all other development agencies, the World Bank has few systematic ways to measure, track or even recognize the effectiveness of its work. Instead, stakeholders are more likely to insist on fiduciary oversight and lending volumes; management is held more accountable for meeting lending targets and upholding administrative requirements than for meeting development goals; and approvals of Bank projects and country partnership strategies – not surprisingly – are rarely based on explicit analyses of their development effectiveness.
 
None of this is new. Enhancing “development effectiveness” emerged as a key concern in a recent review of the World Bank’s governance structure, for example, but similar concerns have been expressed at least since the Wapenhans Report twenty years ago. What is new is the energy surrounding current efforts to put development effectiveness at the center of Bank operations. But doing this means confronting the essential problem that there is no cookbook for development. Whether we care about “big” development (tripling incomes per capita in Malawi over the next 15 years) or “little” development (improving health outcomes for rural women in Orissa this year by expanding access to cooking stoves), the same holds: some things we think work actually do work, at least under certain conditions; other things we only think work, when in fact we have no evidence either way; and we are fairly sure that even all the things we know (or suspect) work will only get us part-way towards our development goals.

Development effectiveness, then, is not just about setting goals and diligently organizing them into SAP (the Bank’s management information system). Bringing development effectiveness to center stage also requires learning about what works, early and continuously. Current proposals emphasize learning through the exchange of experiences among practitioners. But practitioners also need access to systematic and rigorous analysis that assesses whether, how and under what conditions interventions work. Development effectiveness thus means admitting our ignorance: we don’t know if this will work; we will only know if we try; and we have a solid rationale for experimenting and a plan to find out if we are right.

How can a renewed focus on learning about what works be promoted throughout the project cycle to improve development effectiveness? How can incentives – from the ranks of upper management down to task team leaders (TTLs) – be altered to encourage such learning? Our perspective is that development effectiveness will be enhanced if staff rely more (and more systematically) on past research, experience, and evidence in the design, implementation, monitoring and evaluation of development interventions. Given existing budget constraints and the already enormous demands placed on Bank staff, however, this will only work if TTLs are relieved of burdensome administrative requirements that often lack a clear rationale and are given adequate supervision budgets.

From the outset, three key questions should guide the design of country partnership strategies, programs and projects. First, does the set of proposed interventions for a country address key development bottlenecks? Second, are the proposed interventions feasible to implement given the country context? And third, what makes the set of proposed interventions better than potential alternatives? Numerous studies show that contexts are crucial determinants of development effectiveness (see, among many others, Barron et al. and Mansuri and Rao). If development effectiveness is to be the central focus of our work, therefore, we should – and peer reviewers must – examine how the country context (the economic, historical, political and social characteristics relevant to the activities in question) is likely to influence country partnership strategies, project choices and project designs.

Development effectiveness also requires that proposed interventions withstand critical and independent scrutiny of their technical merits. Is the development logic underlying a proposed curriculum reform, or a change in financial regulation, or a power sector infrastructure investment, supported by rigorous analysis? Is it carefully informed by research and past experience? As Martin Ravallion argued last year, an Ex-ante Survey of Knowledge on Impact (ESKI) should be an integral component of this early stage of decision-making.

The menu of interventions that demonstrably promote development does not cover all that we need to do to actually achieve development. To narrow this gap, experimentation and new ideas are essential. Development agencies like the Bank should continue to encourage staff to pursue genuinely new and innovative approaches – or old approaches that are untested in particular country contexts. But, in contrast to current practice, innovation should be tethered to rigorous development logic: What problem is being solved? Which tested approaches are available? What leads innovators to think their solution may work better and can be implemented? What system will they establish so that we can more confidently learn whether or not the innovation works?
 
Monitoring and evaluation are therefore central to learning. Recognizing this, detailed plans for monitoring, evaluation and learning should be an integral component of project design, with the best ones publicly celebrated and rewarded. These plans should have built-in mechanisms that support continuous learning and allow for mid-course adjustments in implementation strategies. Indeed, mid-term reviews of projects are an important but currently neglected vehicle for learning; incentives are currently stacked against making mid-course corrections, and TTLs who learn from mistakes and change design mid-course are often deemed ineffective.

Responding to these issues is far from trivial. To enable a greater focus on learning and development effectiveness, resources will need to shift from other project preparation priorities (each with strong constituencies). And peer review will need to be restructured: reviewers would now make explicit whether the development effectiveness of the proposed intervention is sufficiently documented, given country circumstances. Doing this well is a substantial task, so peer reviewers would need to be widely recognized for their expertise, and their comments would become part of the record that goes to the Regional Operations Committee/Board.

Finally, staff need to be rewarded for their contribution to learning about what works in development. Having the right people in place – people who know how to implement, but also “know how to learn” – is essential to the development enterprise. Recent research by Denizer et al., assessing the efficacy of over 6,000 Bank projects, shows that task manager characteristics matter as much as country characteristics in shaping project performance. To require, recognize and reward high-quality implementation, especially in interventions that of necessity require considerable adaptation to local context, the Bank will need to overtly reward staff who develop useful and usable monitoring systems and who promote learning and experimentation.

One final consideration: task team leaders and individual sector managers should not be the sole or even primary focus of attention for enhancing the effectiveness of development interventions. TTLs cannot be expected to pursue development effectiveness and to learn what works unless incentives at higher levels also change.


Authors

Philip Keefer

Principal Advisor, Institutions for Development at Inter-American Development Bank

Ghazala Mansuri

Lead Economist, Poverty Reduction & Equity Group, World Bank

Vijayendra Rao

Lead Economist, Development Research Group, World Bank

Michael Woolcock

Lead Social Scientist, Development Research Group, World Bank
