
What do we know about the long-term legacy of aid programmes? Very little, so why not go and find out?

Duncan Green

We talk a lot in the aid biz about wanting to achieve long-term impact, but most of the time, aid organizations work in a time bubble set by the duration of a project. We seldom go back a decade later and see what happened after we left. Why not?

[Image: Orphaned and homeless children being given a non-formal education at a school in India]

Everyone has their favourite story of the project that turned into a spectacular social movement (SEWA) or produced a technological innovation (M-PESA) or spun off a flourishing new organization (New Internationalist, Fairtrade Foundation), but this is all cherry-picking. What about something more rigorous: how would you design a piece of research to look at the long-term impacts across all of our work? Some initial thoughts, but I would welcome your suggestions:

One option would be to do something like our own Effectiveness Reviews, but backdated – take a random sample of 20 projects from our portfolio in, say, 2005, and then design the most rigorous possible research to assess their impact.
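As a purely illustrative aside, the sampling step itself is trivial to script; the hard part is everything that comes after the draw. The sketch below (Python) just draws a random sample of 20 projects from a portfolio list. The file name, column names and the 2005 cut-off are hypothetical placeholders, not a description of any real Oxfam system.

```python
# Illustrative only: draw a random sample of 20 projects closed in 2005
# from a portfolio export. The file name and column names are hypothetical.
import csv
import random

with open("portfolio.csv", newline="") as f:
    projects = [row for row in csv.DictReader(f)
                if row["year_closed"] == "2005"]      # assumed column name

random.seed(2005)                                     # fixed seed so the draw is reproducible
sample = random.sample(projects, k=min(20, len(projects)))

for p in sample:
    print(p["project_id"], p["country"])              # assumed column names
```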

There will be some serious methodological challenges to doing that, of course. The further back in time you go, the more confounding events and players will have appeared in the interim, diluting attribution like water running into sand. If farming practices are more productive in this village than in a neighbouring one, who’s to say it was down to that particular project you did a decade ago? And anyway, if practices have been successful, other communities will probably have noticed – how do you allow for positive spillovers and ripple effects? And those ripple effects could have spread much wider – to government policy, or changes in attitudes and beliefs.

Getting Evaluation Right: A Five Point Plan

Duncan Green

Final (for now) evaluationtastic installment on Oxfam’s attempts to do public warts-and-all evaluations of randomly selected projects. This commentary comes from Dr Jyotsna Puri, Deputy Executive Director and Head of Evaluation at the International Initiative for Impact Evaluation (3ie).

Oxfam’s emphasis on quality evaluations is a step in the right direction. Implementing agencies rarely make an impassioned plea for rigor in their evidence collection, and worse, they hardly ever publish negative evaluations. The internal wrangling and the pressure not to publish must have been intense:

  • ‘What will our donors say? How will we justify poor results to our funders and contributors?’
  • ‘It’s suicidal. Our competitors will flaunt these results and donors will flee.’
  • ‘Why must we put these online and why ‘traffic light’ them? Why not just publish the reports, let people wade through them and take away their own messages?’
  • ‘Our field managers will get upset, angry and discouraged when they read these.’
  • ‘These field managers on the ground are our colleagues. We can’t criticize them publicly… where’s the team spirit?’
  • ‘There are so many nuances on the ground. Detractors will misuse these scores and ignore these ground realities.’

The zeitgeist may indeed be transparency, but few organizations are actually putting it into practice.

How Do You Measure History?

Anne-Katrin Arnold

Over and over again, and then again, and then some more, we get asked about evidence for the role of public opinion in development. Where's the impact? How do we know that the public really plays a role? What's the evidence, and is the effect size significant? Go turn on the television. Go open your newspaper. Go to any news website. Do tell me how we're supposed to put that in numbers.

Here's a thought: maybe the role of public opinion in development is just too big to be measured in those economic units that we mostly use in development? How do you squeeze history into a regression model? Let's have a little fun with this question. Let's assume that
y = b0 + b1x1 + b2x2 + b3x3 + b4x4 + b5x5 + b6x6 + b7x7 + b8(x1x4) + b9(x3x4) + e
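Purely to make the thought experiment concrete, here is a minimal sketch of fitting an equation of exactly that shape in Python with statsmodels, on simulated data. The names x1 to x7 are just the anonymous regressors from the formula above (the post doesn't say what they stand for), the interaction terms b8(x1x4) and b9(x3x4) become x1:x4 and x3:x4 in the model formula, and the "true" coefficients used to generate y are invented for the simulation.

```python
# Minimal sketch of the regression above, on simulated data.
# x1..x7 are the unnamed regressors from the equation; the two
# interaction terms are written as x1:x4 and x3:x4 in the formula.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({f"x{i}": rng.normal(size=n) for i in range(1, 8)})

# Invented "true" coefficients, purely to generate an outcome to fit.
df["y"] = (1.0 + 0.5 * df["x1"] - 0.3 * df["x2"] + 0.2 * df["x3"]
           + 0.4 * df["x4"] + 0.1 * df["x5"] + 0.0 * df["x6"] + 0.2 * df["x7"]
           + 0.6 * df["x1"] * df["x4"] - 0.4 * df["x3"] * df["x4"]
           + rng.normal(size=n))

model = smf.ols("y ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x1:x4 + x3:x4",
                data=df).fit()
print(model.params)   # estimated b0..b9
```

The point of the toy example is only that writing the model down is the easy bit; deciding what the x's should be, and how to measure something like public opinion at all, is where the real difficulty lies.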

Anecdote + Anecdote = Anecdata?

Anne-Katrin Arnold

One of the most difficult barriers in the field of communication and development is the lack of quantitative empirical evidence that demonstrates the effect of communication on development. When we argue that communication is central to development and increases development effectiveness, economists often raise an eyebrow and ask "Where's the data?" It's a legitimate question.