What do we know about the long-term legacy of aid programmes? Very little, so why not go and find out?

Duncan Green

We talk a lot in the aid biz about wanting to achieve long-term impact, but most of the time, aid organizations work in a time bubble set by the duration of a project. We seldom go back a decade later and see what happened after we left. Why not?

Everyone has their favourite story of the project that turned into a spectacular social movement (SEWA), or produced a technological innovation (M-PESA), or spun off a flourishing new organization (New Internationalist, Fairtrade Foundation), but this is all cherry-picking. What about something more rigorous: how would you design a piece of research to look at the long-term impacts across all of our work? Some initial thoughts, but I would welcome your suggestions:

One option would be to do something like our own Effectiveness Reviews, but backdated – take a random sample of 20 projects from our portfolio in, say, 2005, and then design the most rigorous possible research to assess their impact.
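
As a minimal sketch of the sampling step (the file and column names here are hypothetical), drawing the 20 projects with a fixed random seed keeps the selection reproducible and auditable, so nobody can accuse the review of quietly cherry-picking:

```python
import csv
import random

# Hypothetical input: one row per project from the 2005 portfolio,
# with at least an identifier and a country column (names invented).
with open("portfolio_2005.csv", newline="") as f:
    projects = list(csv.DictReader(f))

# A fixed seed makes the draw reproducible, so the sample can be
# re-checked later by anyone with the same portfolio file.
random.seed(2005)
sample = random.sample(projects, k=20)

for p in sample:
    print(p["project_id"], p["country"])
```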

There will be some serious methodological challenges, of course. The further back in time you go, the more confounding events and players will have appeared in the interim, diluting attribution like water running into sand. If farming practices are more productive in this village than in a neighbouring one, who’s to say it was down to that particular project you ran a decade ago? And anyway, if practices have been successful, other communities will probably have noticed – how do you allow for positive spillovers and ripple effects? And those ripple effects could have spread much wider – to government policy, or changes in attitudes and beliefs.
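
To see why spillovers bite, here is a toy simulation (all numbers invented): if neighbouring ‘comparison’ villages pick up part of the benefit, a naive treated-versus-neighbour comparison understates the project’s true effect.

```python
import random

random.seed(1)

TRUE_EFFECT = 10.0  # yield gain in villages the project worked in
SPILLOVER = 6.0     # gain that leaks to neighbouring 'comparison' villages

def base_yield(mean=100.0, noise=5.0):
    return random.gauss(mean, noise)

treated = [base_yield() + TRUE_EFFECT for _ in range(50)]
neighbours = [base_yield() + SPILLOVER for _ in range(50)]

naive_gap = sum(treated) / len(treated) - sum(neighbours) / len(neighbours)
print(f"naive treated-minus-neighbour gap: {naive_gap:.1f} "
      f"(true effect: {TRUE_EFFECT})")
# The naive comparison recovers roughly TRUE_EFFECT - SPILLOVER,
# understating the project's real impact.
```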

New thinking on digital storytelling

Maya Brahmam

I have been reading with interest some of the questions posed about storytelling inside the World Bank. The recent blog post by Bruce Wydick is a case in point. Reactions ranged from positive to uneasy about the idea that we’re using stories to share results, when we’re generally more comfortable with a “just the facts” approach. One concern seems to be that we might surrender our decision-making to the emotion of a good story rather than to hard evidence.

In fact, doesn’t the word fabulist mean someone who stretches the truth a bit by telling stories? I was therefore not so surprised to find storytelling offered as an explanation for NBC news anchor Brian Williams’ recent troubles. A Washington Post article about Williams noted, “Former colleagues reveal a man who took such delight in spinning yarns that he could sometimes lose sight of where the truth began and where it ended.”

We have examined brain science and other fields to figure out why stories are so compelling, and I’ve blogged about this in this space before. Storytelling is compelling because it’s memorable, shareable (a nice feature in this digital world), and relatable (people respond to, and retain for longer, material with emotional content).

How Can Complexity and Systems Thinking End Malaria?

Duncan Green

This is complexity week on the blog, pegged to the launch of Ben Ramalingam’s big new book ‘Aid on the Edge of Chaos’ at the ODI on Wednesday (I get to be a discussant – maximum airtime for least preparation. Result.)

So let’s start with a taster from the book that works nicely as a riposte to all those people who say (sometimes with justification, I admit) that banging on about complexity is just a lot of intellectual self-indulgence (sometimes they’re not so polite): ‘We know what works, so why complicate things?’ Hmmm, read on:

‘Kenya’s Mwea region is especially prone to malaria because it is an important rice-growing region, and large paddies provide an ideal breeding ground and habitat for mosquitoes. The application of insecticides and anti-malarial drugs has been widespread, but there has been a marked rise in resistance among both mosquitoes and the parasites themselves.

A multidisciplinary team developed and launched an eco-health project, employing and training community members as local researchers, whose first task was to conduct interviews across four villages in the region, to give a first view of the malaria ‘system’ from the perspective of those most affected by it.

The factors involved were almost dizzyingly large in number – from history to social background to political conflicts. A subsequent evaluation of the programme referred to this as an admirable feat of analysis.

Using a systems analysis approach that placed malaria in the wider ecological context was a critical part of the programme design.’

What is 'Leverage' (NGO-Speak Version) and Why Does it Matter?

Duncan Green

A few weeks ago I attended the twice-yearly gathering of Oxfam GB’s big cheeses – the regional directors, Oxford bosses and a smattering of more exotic cheeses from other Oxfam affiliates (Australia and the US this time). We started off with a tour of the regions – what’s on their minds? Three common themes emerged: political upheaval (disenchantment with elected governments, protest, the threat of civil war); religious conflict (fundamentalism); and rising inequality.

The topic of this meeting was a classic new fuzzword – ‘leverage’. And like all good fuzzwords, it was frustrating and helpful in equal measure. Frustrating in its hard-to-define slipperiness; helpful because it established a fuzzy-boundaried arena of conversation that allowed us to have an interesting exchange.

The overriding purpose of leverage is another bit of management jargon: ‘going to scale’. How do you influence bigger players so as to reach many times more people than you would by acting alone? The ambition is heroic, perhaps crushing on occasion – with your few thousand (or even million) quid, it’s not enough to just help a few hundred people; you have to think about how to transform lives en masse. I suspect it stems partly from frustration born of aiming too low, and partly from the push for results.

Moneyballing Development: A Challenge to our Collective Wisdom of Project Funding

Tanya Gupta
The biggest promise of technology in development is, perhaps, that it can give us access to consistent, actionable and reliable data on investments and results. However, somewhat shockingly, we in development have not capitalized on this promise as fully as the private sector has. Would you invest your precious pension in the hope of getting something back, but without any reliable data on the rate of return or on how risky the investment is? If you had two job applicants, one a methamphetamine addict and the other with a solid work history and great references, would you give equal preference to both? If your answer to either is no, then take a look at the field of international development and consider the following:
  • Surprising lack of consistent, reliable data on development effectiveness: across the various sectoral interventions, we have no uniformly reliable data on the effectiveness of each dollar spent. For example, of every dollar spent on infrastructure programs in sub-Saharan Africa, how many cents are effective? On the same assumptions, do we have a comparable number for South East Asia? In other words, why don’t we have more data on possible development investments and their associated costs, benefits/returns and risks?
  • Failure to look at development-effectiveness evidence at the planning stage: very few development programs look at the effectiveness evidence before selecting a particular intervention. Say a sectoral intervention A in a particular region has a history of positive outcomes (due to attributable factors such as well-performing implementing agencies), while for another intervention B the chances of improved outcomes are foggy. Given (roughly) the same needs, why shouldn’t we route funds to A instead of B at the planning stage? Why give equal preference to both based purely on need? (A toy comparison is sketched below.)
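
As a minimal sketch of that planning-stage comparison (all numbers and names are invented), one crude heuristic is to shrink each intervention’s historical benefit-per-dollar estimate towards a sceptical prior in proportion to how thin its evidence base is:

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    benefit_per_dollar: float  # mean benefit per $1 from past evaluations
    evidence_weight: float     # 0..1: how reliable/complete that evidence is

def risk_adjusted_return(iv: Intervention, sceptical_prior: float = 0.5) -> float:
    """Shrink the historical estimate towards a sceptical prior when
    the evidence base is thin (a simple Bayesian-flavoured heuristic)."""
    return (iv.evidence_weight * iv.benefit_per_dollar
            + (1 - iv.evidence_weight) * sceptical_prior)

candidates = [
    Intervention("A: strong track record", benefit_per_dollar=1.4, evidence_weight=0.8),
    Intervention("B: foggy outcomes", benefit_per_dollar=1.6, evidence_weight=0.2),
]

for iv in sorted(candidates, key=risk_adjusted_return, reverse=True):
    print(f"{iv.name}: adjusted return {risk_adjusted_return(iv):.2f} per $1")
```

With these invented numbers, A’s well-evidenced 1.4 beats B’s flimsier 1.6 (adjusted returns of 1.22 versus 0.72 per dollar): the ‘Moneyball’ point that past performance data, not need alone, should inform where the money goes.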

Aid and Complex Systems cont’d: Timelines, Incubation Periods and Results

Duncan Green

I’m at one of those moments where all conversations seem to link to each other, I see complex systems everywhere, and I’m wondering whether I’m starting to lose my marbles. Happily, lots of other people seem to be suffering from the same condition, and a bunch of us met up earlier this week with Matt Andrews, who was in the UK to promote his fab new book The Limits of Institutional Reform in Development (I rave-reviewed it here). The conversation was held under the Chatham House Rule, so no names, no institutions, etc.

Whether you work on complex systems, governance reform or fragile states, the emerging common ground seems to be around what not to do and, to a lesser extent, the ‘so whats’. What can outsiders do to contribute to change in complex, unpredictable situations where, whether due to domestic opposition or sheer irrelevance to the actual context, imported blueprints and ‘best practice guidelines’ are unlikely to get anywhere?

In his book, Matt boils down his considerable experience at the World Bank and Harvard into a proposal for ‘PDIA’ – Problem-Driven Iterative Adaptation – which I described pretty fully in my review. The conversation this week fleshed out that approach and added some interesting new angles.

How to Plan When You Don’t Know What is Going to Happen? Redesigning Aid for Complex Systems

Duncan Green

They’re funny things, speaker tours. On the face of it, you go from venue to venue, churning out the same presentation – more wonk-n-roll than rock-n-roll. But you are also testing your arguments, adding slides where there are holes, deleting ones that don’t work. Before long the talk has morphed into something very different.

So where did I end up after my most recent attempt to promote FP2P in the US and Canada? The basic talk is still ‘What’s Hot and What’s Not in Development’ – the title I’ve used in the UK, India, South Africa, etc. But the content has evolved. In particular, the question of complex systems provoked by far the most discussion.

Government Spending Watch - A New Initiative You Really Need to Know About

Duncan Green

I’m consistently astonished by how little we know about the important stuff in development. Take the Millennium Development Goals – the basis for innumerable aid debates, campaigns and negotiations. A large chunk of the MDG agenda concerns the size and quality of public spending – on health, education, water, sanitation, etc. So obviously, the first thing we need to know is how much governments are spending on these things, right?

Well, no, actually, because we don’t have those numbers. Until now. Oxfam has teamed up with an influential and well-connected NGO, Development Finance International, which advises developing-country governments around the world. Working with a network of government officials, DFI has pulled together and analysed the budgets of 52 low- and middle-income countries (with another 34 to follow). The result is a new database, Government Spending Watch (summary of the overall project here), and a report, ‘Progress at Risk’, previewed in Washington last Friday at a joint DFI/Oxfam America event to coincide with the IMF and World Bank Spring Meetings. The full report won’t be ready ‘til May, but an initial draft executive summary is available, and here’s what it says.
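
To make that concrete, here is a minimal sketch of the kind of question such a database can answer (the column names and figures below are invented, not GSW’s actual schema): spending on an MDG sector as a share of each government’s budget.

```python
import pandas as pd

# Hypothetical extract: column names and figures are invented,
# not Government Spending Watch's actual schema.
df = pd.DataFrame({
    "country": ["A", "A", "B", "B"],
    "sector": ["health", "education", "health", "education"],
    "spend_usd_m": [120.0, 200.0, 80.0, 150.0],
    "total_budget_usd_m": [1000.0, 1000.0, 600.0, 600.0],
})

# The MDG-style question: what share of each budget goes to each sector?
df["share_of_budget"] = df["spend_usd_m"] / df["total_budget_usd_m"]
print(df.pivot(index="country", columns="sector", values="share_of_budget"))
```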

The Political Implications of Evidence-Based Approaches (aka Start of This Week’s Wonkwar on the Results Agenda)

Duncan Green

The debate on evidence and results continues to rage. Rosalind Eyben and Chris Roche, two of the organisers of April’s Big Push Forward conference on the Politics of Evidence, kick off a discussion. Tomorrow Chris Whitty, DFID’s Director of Research and Evidence and Chief Scientific Adviser, and Stefan Dercon, its Chief Economist, respond.

Distinct from its more general usage – whatever is observed or experienced – ‘evidence’ has acquired a particular meaning relating to proof of ‘what works’, above all through findings from rigorous experimental trials. But no one really believes that it is feasible for external development assistance to consist purely of ‘technical’ interventions. Most development workers do not see themselves as scientists in a laboratory, but as reflective practitioners seeking to learn how to support locally generated transformative processes for greater equity and social justice. So where have these experimental approaches come from, and what is at stake?

Lant Pritchett v the Randomistas on the Nature of Evidence - Is a Wonkwar Brewing?

Duncan Green

Recently I have had a lot of conversations about evidence. First, one of the periodic retreats of Oxfam senior managers reviewed our work on livelihoods, humanitarian partnership and gender rights. The talk combined quantitative work (for example, the findings of our new ‘effectiveness reviews’), case studies, and the accumulated wisdom of our big cheeses. But the tacit hierarchy of these different kinds of knowledge worried me – anything with a number attached had a privileged position, however partial the number or questionable the process for arriving at it. In contrast, decades of experience were not even credited as ‘evidence’, but often written off as ‘opinion’. It felt like we were in danger of discounting our richest source of insight – gut feeling.

In this state of discomfort, I went off for lunch with Lant Pritchett (he seems to have forgiven me for my screw-up of a couple of years ago). He’s a brilliant and original thinker and speaker on any number of development issues, but I was most struck by the vehemence of his critique of the RCT randomistas and the quest for experimental certainty. Don’t get me (or him) wrong: he thinks the results agenda is crucial in ‘moving from an input orientation to a performance orientation’, and set out his views as long ago as 2002 in a paper called ‘It Pays to be Ignorant’, but he sees the current emphasis on RCTs as an example of the failings of ‘thin accountability’ compared to the thick version.