

Payment by results in aid: hype or hope?

By Duncan Green

Is payment by results just the most recent over-hyped solution for development, or is it an effective incentive for accelerating change?

[Image caption: Madeleine has a 17-month-old daughter who was born at the village's primary health facility]

When reading up on payment by results (PbR) recently, I was struck by the contrast between how quickly it has spread through the aid world and how little evidence there is that it actually works.

In a way, this is unavoidable with a new idea – you make the case for it based on theory, then you implement, then you test and either improve or abandon it. In this case the theory, ably argued by the Center for Global Development (CGD) and others, was that PbR aligns incentives in developing country governments with development outcomes and encourages innovation, since it does not specify how to, for example, reduce maternal mortality; it merely rewards governments when they achieve it.

Those arguments have certainly persuaded a bunch of donors. The UK government (pdf) says that this “new form of financing that makes payments contingent on the independent verification of results ... is a cross government reform priority”. The UK’s department for international development (DfID) called its 2014 PbR strategy Sharpening Incentives to Perform (pdf) and promised to make it “a major part of the way DfID works in future”. David Cameron, the British prime minister, waxes lyrical on the topic.

But I seem to be coming up against a long list of potential problems with PbR. Let’s start with Paul Clist and Stefan Dercon’s 12 Principles for PbR in International Development (pdf), which sets out a series of situations in which PbR is either unsuitable or likely to backfire. For example, if results cannot be unambiguously measured, lawyers are going to have a field day when a donor tries to refuse payment by arguing they haven’t been achieved. They also make the point that PbR makes no sense if the recipient government already wants to achieve a given goal – in that case you should just give them the money up front and let them get on with it.

What do we know about the long-term legacy of aid programmes? Very little, so why not go and find out?

By Duncan Green

We talk a lot in the aid biz about wanting to achieve long-term impact, but most of the time, aid organizations work in a time bubble set by the duration of a project. We seldom go back a decade later and see what happened after we left. Why not?

[Image caption: Orphaned and homeless children being given a non-formal education at a school in India]

Everyone has their favourite story of the project that turned into a spectacular social movement (SEWA) or produced a technological innovation (M-PESA) or spun off a flourishing new organization (New Internationalist, Fairtrade Foundation), but this is all cherry-picking. What about something more rigorous: how would you design a piece of research to look at the long-term impacts across all of our work? Some initial thoughts, but I would welcome your suggestions:

One option would be to do something like our own Effectiveness Reviews, but backdated – take a random sample of 20 projects from our portfolio in, say, 2005, and then design the most rigorous possible research to assess their impact.
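The sampling step above is simple to make transparent and reproducible. Here is a minimal sketch in Python – the project IDs and portfolio size are entirely hypothetical, invented for illustration:

```python
import random

# Hypothetical project IDs standing in for a 2005 portfolio
portfolio_2005 = [f"PRJ-2005-{i:03d}" for i in range(1, 121)]

# Fix the seed so the draw can be audited and reproduced later
random.seed(42)

# Draw 20 projects without replacement
sample = random.sample(portfolio_2005, k=20)
print(sample)
```

Publishing the seed alongside the sample is one way to show the selection was genuinely random rather than cherry-picked.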

There will be some serious methodological challenges to doing that, of course. The further back in time you go, the more confounding events and players will have appeared in the interim, diluting attribution like water running into sand. If farming practices are more productive in this village than in a neighbouring one, who’s to say it was down to that particular project you did a decade ago? And anyway, if practices have been successful, other communities will probably have noticed – how do you allow for positive spillovers and ripple effects? And those ripple effects could have spread much wider – to government policy, or to changes in attitudes and beliefs.

Getting Evaluation Right: A Five Point Plan

By Duncan Green

Final (for now) evaluationtastic installment on Oxfam’s attempts to do public warts-and-all evaluations of randomly selected projects. This commentary comes from Dr Jyotsna Puri, Deputy Executive Director and Head of Evaluation of the International Initiative for Impact Evaluation (3ie)

Oxfam’s emphasis on quality evaluations is a step in the right direction. Implementing agencies rarely make an impassioned plea for evidence and rigor in their evidence collection, and worse, they hardly ever publish negative evaluations. The internal wrangling, and the pressure not to publish them, must have been intense:

  • ‘What will our donors say? How will we justify poor results to our funders and contributors?’
  • ‘It’s suicidal. Our competitors will flaunt these results and donors will flee.’
  • ‘Why must we put these online and why ‘traffic light’ them? Why not just publish the reports, let people wade through them and take away their own messages?’
  • ‘Our field managers will get upset, angry and discouraged when they read these.’
  • ‘These field managers on the ground are our colleagues. We can’t criticize them publicly… where’s the team spirit?’
  • ‘There are so many nuances on the ground. Detractors will mis-use these scores and ignore these ground realities.’

The zeitgeist may indeed be transparency, but few organizations are actually doing it.

How Do You Measure History?

By Anne-Katrin Arnold

Over and over again, and then again, and then some more, we get asked about evidence for the role of public opinion for development. Where's the impact? How do we know that the public really plays a role? What's the evidence, and is the effect size significant? Go turn on the television. Go open your newspaper. Go to any news website. Do tell me how we're supposed to put that in numbers.

Here's a thought: maybe the role of public opinion in development is just too big to be measured in those economic units that we mostly use in development? How do you squeeze history into a regression model? Let's have a little fun with this question. Let's assume that
y = b0 + b1x1 + b2x2 + b3x3 + b4x4 + b5x5 + b6x6 + b7x7 + b8(x1x4) + b9(x3x4) + e
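For readers who want to play along, the model above – seven main effects plus the two interaction terms x1·x4 and x3·x4 – can be fitted by ordinary least squares in a few lines. This is a toy sketch with simulated data and made-up coefficients, purely to show the mechanics; it makes no claim about what the real-world variables would be:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors x1..x7 (stand-ins for whatever one might measure)
X = rng.normal(size=(n, 7))
x1, x2, x3, x4, x5, x6, x7 = X.T

# Design matrix matching the equation: intercept, seven main effects,
# and the interaction terms (x1*x4) and (x3*x4)
design = np.column_stack([np.ones(n), X, x1 * x4, x3 * x4])

# Invented "true" coefficients b0..b9; simulate y with a small error term e
true_b = np.array([1.0, 0.5, -0.3, 0.8, 0.2, 0.0, 0.4, -0.6, 0.7, -0.2])
y = design @ true_b + rng.normal(scale=0.1, size=n)

# Ordinary least squares recovers the coefficients from the simulated data
b_hat, *_ = np.linalg.lstsq(design, y, rcond=None)
print(np.round(b_hat, 2))
```

Of course, the whole point of the post stands: writing the regression down is easy; deciding what could possibly serve as x1 through x7 for "history" is the hard part.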

Anecdote + Anecdote = Anecdata?

By Anne-Katrin Arnold

One of the most difficult barriers in the field of communication and development is the lack of quantitative empirical evidence that demonstrates the effect of communication on development. When we argue that communication is central to development and increases development effectiveness, economists often raise an eyebrow and ask "Where's the data?" It's a legitimate question. And it's a question we don't have an answer to - yet.