
Four cheers for the “results agenda”

By Adam Wagstaff
Photo © Dominic Sansoni / World Bank

The development community hasn’t exactly just woken up to the fact that development is about achieving something. Projects have had logframes since time immemorial, showing how project activities and spending are expected to lead ultimately to development outcomes—things that matter to people, like health and learning. But the “results agenda” (an agenda that dates back to 2003 but which seems to be gaining momentum) has the scope to be transformative in at least four ways.

1) Work backwards, not forwards
First, it invites us to work backwards from these things that matter and to think about alternative ways of achieving these outcomes. Take education. A lot of projects in the Bank and other development agencies have focused on building and rehabilitating schools, with the expectation that this will lead to higher school enrollments. And yet, as my colleague Deon Filmer showed a while ago, proximity to a school has very little effect on the likelihood of a child enrolling in school. By contrast, as he and Norbert Schady showed in another paper, providing scholarships to poor children does increase enrollments.

2) Let’s incentivize results
The results agenda invites us, then, to focus efforts on fixing the demand side as well as fixing things on the supply side. The latter need not entail bricks and mortar, or even increasing budgets to schools. A study by Karthik Muralidharan and Venkatesh Sundararaman showed that linking teachers’ pay in Andhra Pradesh to the achievements of the kids enrolled in school had a much larger impact on educational attainment than increasing school budgets. The use of financial incentives to improve performance (“pay for performance” or P4P) is fast becoming popular in the education and health sectors. The Bank’s health sector has a large trust-funded program supporting Bank projects with a P4P component. Early results from Rwanda look promising, though my friend Jack Langenbrunner recently alerted me to a paper that reports rather sobering evidence from the UK’s P4P experience in primary care.

So, bringing in incentives is a second big plus for the results agenda. It’s not just providers that can be incentivized through P4P—governments can too. P4P underpins the proposed new Program-for-Results lending instrument that the Bank’s Operations Policy and Country Services unit is crafting, building on innovative lending operations pioneered in the Human Development (HD) sectors in the Latin America and the Caribbean region. The idea is simple: a project would finance results, not inputs. This paves the way for a shift of orientation away from inputs and bricks and mortar toward a range of programs (demand- and supply-side) that improve outcomes.

3) Let’s see more statistics
All this forces us to be much more evidence-based. More than this—it calls for evidence linking projects and programs to outcomes. This for me is benefit #3. Assembling this evidence isn’t easy. Impact evaluation (IE) is one very important weapon in our arsenal. My friends in the HD Chief Economist’s office have just launched a wonderful guide to IE methods, along with training materials and an interactive online version of the book. The number of IE studies in the Bank continues to grow apace, thanks largely to the Development Impact Evaluation (DIME) and Spanish Impact Evaluation Fund (SIEF) initiatives. This work will continue to add to the evidence base and will be an integral element of the results agenda.

But IE has its limitations, and they are not always appreciated. IE needs a group of people who are not affected by the project or program; without such a group, the IE analyst has no hope of forming a “control group”. In many situations, such a group doesn’t exist.

A recent paper of mine looked at the case of two concurrent Bank projects in Vietnam. The Bank focused its efforts on certain provinces, and the government decided to focus its efforts on the remaining ones. The government’s spending allocation in effect changed as a result of the project; it redirected spending toward the non-project provinces. The non-project provinces indirectly benefitted from the Bank project, since government spending on them was higher than it would have been without the Bank projects. They can’t therefore serve as a control group. This is not an uncommon situation. In such a setting the analyst has no choice but to look for alternative methods to estimate the project’s impact; in my case, I simulated the “counterfactual” spending and outcome distributions, a process that requires many heroic assumptions.
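The logic of the Vietnam example can be made concrete with a toy simulation. Everything below—the number of provinces, the budget figures, the government’s reallocation rule, and the log-spending outcome model—is invented purely for illustration; it is not the method of the paper, just a sketch of why the non-project provinces gain too and so cannot serve as a control group.

```python
# Toy sketch: the Bank funds some provinces, the government redirects
# its own budget to the rest, so no province is left untouched.
# All numbers and the outcome model are invented for illustration.
import numpy as np

n_provinces = 10
bank_project = np.array([True] * 5 + [False] * 5)

govt_budget = 100.0   # hypothetical total government spending
bank_funds = 20.0     # hypothetical extra Bank financing

# Observed world: Bank funds flow to project provinces; the government
# responds by tilting its own budget toward the non-project provinces.
observed = np.where(bank_project,
                    govt_budget * 0.4 / 5 + bank_funds / 5,
                    govt_budget * 0.6 / 5)

# Simulated counterfactual: no Bank project, and the government splits
# its budget evenly across all provinces (a strong assumption).
counterfactual = np.full(n_provinces, govt_budget / n_provinces)

# Assumed outcome model: outcomes rise with the log of spending.
def outcome(spending):
    return 10.0 * np.log(spending)

impact = outcome(observed) - outcome(counterfactual)
print("per-province impact:", impact.round(2))
print("mean gain in NON-project provinces:",
      impact[~bank_project].mean().round(2))
```

Because the non-project provinces’ estimated gain is positive, comparing project to non-project provinces would understate the project’s true impact—exactly the contamination problem described above.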

Another case where there is no obvious control group is when the government implements a new policy nationwide in one go. Simulation is an obvious option here too. Examples of studies comparing what happened with a simulated counterfactual include Shaohua Chen and Martin Ravallion’s study of China’s accession to the World Trade Organization, and Martin Ravallion and Dominique van de Walle’s study of the privatization of land holdings in Vietnam. Another approach is to look for other countries that have adopted the same reform but at different times, and exploit this staggered implementation to identify the reform’s impacts. I did this when analyzing the introduction of sweeping health sector reforms in the Europe and Central Asia region, including reforms to insurance arrangements and to the way hospitals are paid.
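The staggered-implementation idea can be sketched as a two-way fixed-effects (difference-in-differences) regression. The panel below is entirely synthetic—country labels, adoption years, and the “true” reform effect of 5.0 are all made up—so the only point is that staggered adoption lets the regression separate the reform’s effect from country differences and common time trends.

```python
# Difference-in-differences sketch with staggered reform adoption.
# All data are synthetic; the true effect (+5.0) is invented so we can
# check that the regression recovers it.
import numpy as np

rng = np.random.default_rng(0)

countries = ["A", "B", "C", "D"]
adoption_year = {"A": 2002, "B": 2004, "C": 2006, "D": None}  # D never reforms
years = list(range(2000, 2010))
TRUE_EFFECT = 5.0

rows = []  # (country index, year index, treated flag, outcome)
for ci, c in enumerate(countries):
    country_level = rng.normal(50, 5)              # country fixed effect
    for yi, y in enumerate(years):
        treated = adoption_year[c] is not None and y >= adoption_year[c]
        outcome = (country_level + 0.5 * yi        # common time trend
                   + TRUE_EFFECT * treated
                   + rng.normal(0, 1))             # noise
        rows.append((ci, yi, float(treated), outcome))
rows = np.array(rows)

# Regressors: country dummies + year dummies (one dropped) + treated flag.
C = np.eye(len(countries))[rows[:, 0].astype(int)]
Y = np.eye(len(years))[rows[:, 1].astype(int)][:, 1:]
X = np.column_stack([C, Y, rows[:, 2]])
beta, *_ = np.linalg.lstsq(X, rows[:, 3], rcond=None)
print(f"estimated reform effect: {beta[-1]:.2f}")  # close to 5.0
```

With a homogeneous effect, as assumed here, the two-way fixed-effects estimate recovers the truth; with effects that vary across adopters or over time, more care is needed—one reason none of this substitutes for a genuine control group when one exists.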

None of these studies is, strictly speaking, an IE study. But without the use of such methods, we can’t estimate reliably the effects of policies and programs that affect everyone in the country. This makes me nervous when aid agencies (including, I admit, the Bank) boast how many sick people were treated, how many lives were saved, etc. as a result of activities that benefitted an entire country. The implicit counterfactual underlying such claims is quite implausible, and IE will typically not help us construct one.

4) Knowledge matters—probably much more than we think
There’s another broad area where IE is unlikely to get us very far forward, and it’s one where the Bank invests a vast amount of resources and quite likely has a very large impact: knowledge.

Bank staff generate new knowledge through their research and Economic and Sector Work (ESW), but also acquire knowledge through their operational work. This formal and tacit knowledge ought to benefit client countries—partly through better-designed projects, but also through the Bank’s role in helping shape policy through policy dialogue, technical assistance, and other knowledge-based activities. The results agenda invites us to take knowledge very seriously—benefit #4.

Getting a sense of the impact of the Bank’s formal and tacit knowledge on outcomes in client countries is a huge challenge. As Martin Ravallion and I found out, even doing an inventory of the Bank’s formal publications is a difficult task, since Bank databases exclude the largest single (and most cited) element of our formal publications—journal articles. Assembling an inventory of formal ESW ought to be easier, but what about “knowledge products” such as briefing notes? And what about the tacit knowledge stuck in the heads of our staff? My impression is that management consulting firms are far more efficient than we are at squeezing this knowledge out of the heads of staff and into a searchable database.

But if we don’t know what we know, how will we ever know what our knowledge contributes to development outcomes? And not knowing this will likely mean we miss an important channel through which the Bank achieves results; this channel might well be far more important than the financing channel, especially in middle-income countries. The results agenda pushes firmly down the road of taking stock of what we know and trying to assess its impact on outcomes—that’s got to be good news.

A toast, then, to the results agenda

So I think at least four cheers are in order for the results agenda:

(1) It gets us to work back from outcomes to think about multiple ways of achieving better outcomes, including fixing the demand side;

(2) Linked to this, it invites us to focus on ways to incentivize results, whether we’re talking about rewarding clinics with better pay or rewarding countries with larger loans;

(3) The results agenda forces us to get serious about evidence. IE will help here, but it’s not a panacea and needs to be supplemented by other methods, including simulation;

(4) Last, the results agenda invites us to think more seriously about the role of knowledge in the Bank’s work. This ought to force us to get better at capturing tacit knowledge and making knowledge flow better around the institution. It ought to push us toward assessing the impact of our knowledge on development outcomes, whether this occurs through better-designed projects or through helping governments make better policy.

That’s not a bad list!