In this final post (Chris Whitty and Stefan Dercon have opted not to write a second installment), Rosalind Eyben and Chris Roche reply to their critics. And now is your chance to vote – but only if you’ve read all three posts, please. The comments on this have been brilliant, and I may well repost some next week, when I’ve had a chance to process.
Let’s start with what we seem to agree upon:
- Unhappiness with ‘experts’ – or at least the kind that pat you patronizingly on the arm,
- The importance of understanding context and politics,
- Power and political institutions are generally biased against the poor,
- We don’t know much about the ability of aid agencies to influence transformational change,
- Mixed methods approaches to producing ‘evidence’ are important. And, importantly,
- We are all often wrong!
We suggest the principal difference between us concerns our assumptions about: how different kinds of change happen; what we can know about change processes; if, how and when evidence from one intervention can practically be taken and sensibly used in another; and how institutional and political contexts determine how evidence is then used in practice. This set of assumptions is of fundamental importance for international development practice.
Firstly, we understand social change to be emergent and messy. Organised efforts to direct change confront the impossibility of any of us ever having a total understanding of all the sets of societal relationships and contested meanings that generate change and are in constant flux. New inter-relational processes are constantly being generated that in turn affect and change those already in existence. Complexity theory privileges a concern for process as much as goals and supports an approach that seeks to make a difference by working through relationships rather than focusing on narrowly defined pre-set projects and outcomes. It encourages being explicit about values, a concern for how an organisation’s intervention is judged by others – in particular by those who are ultimately meant to benefit – and the creation of effective feedback mechanisms, including, but not limited to, those produced by high quality research.
At their best, development practitioners often have to surf the unpredictable realities of national politics, spotting opportunities, supporting interesting new initiatives, and acting like entrepreneurs or searchers rather than planners. They keep their eye on processes, looking to ride those waves that appear to be heading in the direction that matches their own agencies’ missions and values, and which can support local coalitions for change. By contrast, assuming that development practitioners are in control and that change is predictable – as expressed through some of the demands of evidence-based planning approaches – prevents them from responding effectively to feedback in an often unpredictable and dynamic policy environment, and can, if badly managed, chain them to a desk. Ben Ramalingam’s blog – Aid on the Edge of Chaos – offers current insights on complexity thinking in development.
That it is relatively easier to eradicate rinderpest in cattle and build bridges than to tackle police corruption or reduce violence against women is because the former are examples of what Dave Snowden describes as complicated problems, while the latter are complex – an effect of there being so many collaborators involved in non-routine interventions, with no consensus among them. Such issues can’t be ‘solved’ like a Sudoku puzzle. In that respect, we were puzzled by Chris and Stefan’s two examples of what we would describe as complex issues. We found the first – the effect of political quotas for women in rural India – to be somewhat superficial, and wondered why so little reference was made to the considerable number of studies from political sociology on the same topic that ask more probing questions and arguably provide more insightful understanding of what has been learnt in different contexts. The World Bank study on whether top-down, large-scale interventions can stimulate bottom-up participation was, on the other hand, puzzling for exposing myths that perhaps only World Bank staff had previously believed in, while ignoring the very considerable body of sociological and anthropological knowledge on this topic. It led us to wonder whether you need economists to find something out for it to be accepted as evidence. Perhaps that explains some of ‘the evidence-barren areas in development’…
Which brings us to the second set of assumptions, about how we know and therefore what is judged as evidence. This is about more than pluralism and mixed methods, though we recognise that recent advances, in this case funded by DFID, are important. Let’s start by insisting that a criterion for rigorous research is that it should be explicit about its assumptions or world-view. We suggest that a weakness in many studies is that they focus solely on the methodological and procedural and render invisible their ‘philosophical plumbing’. The evidence-based approaches that Stefan and Chris advocate impose a certain view of the world, just as our approaches do. Their claims to the contrary foreclose any possible discussion about the different intellectual traditions in interpreting reality. Theory invites argument and debate.
An interesting paper by Greenhalgh and Russell on evaluating health programmes notes how experimental approaches often ignore the tricky philosophical and political questions. Like the authors of that article, we take an approach that recognizes the partial (in both senses of the word) nature of our knowledge. How does this approach try to deal with unavoidable bias? Through seeking to use dialogic, democratic methods in which multiple perspectives and understandings of what is at stake are explored, and through the use of multiple and hybrid approaches. The implications for practice are to be involved in mutual single and double-loop learning and adaptation as you go along. This does not preclude specific studies commissioned from ‘experts’, but it is not they alone who should define the problem, nor should they assume that only their kind of knowledge has validity for collective efforts to secure greater equity and social justice. Knowledge and power are bed-mates. Our critique of ‘expertise’ – the laboratory references are an extreme example of the trend – is that expertise often uses its power to ignore other ways of knowing and doing, something Chris and Stefan would seem to agree with. Might it be that some of these other ways might prove to be pretty good at tackling police corruption or reducing violence against women?
This is where reflexivity comes in. Those of us working as practitioners, bureaucrats and scholar activists in international development cannot escape the contradiction that we are strategizing for social transformation from a position in a global institution – international development – that can and does sustain inequitable power relations, as much as it succeeds in changing them. Reflexive practice seeks to address these power inequities by recognizing that (a) many problems we seek to address are the products of human interaction – and some very important problems for people with less voice go ignored for that reason, and (b) even if people are in agreement about there being a problem, they will often offer multiple diagnoses for its existence, and thus of course (c) multiple solutions, which need to be debated democratically with different kinds of evidence, based on alternative ways of knowing, and having the space to be heard.
We are heartened to note that Chris and Stefan believe “that all actions by external actors will interact with political forces and vested interests” and that “in many of the settings where development actors want to make a difference, power and political institutions are biased against the poor”. We would therefore assume that a reflexive donor would recognise that their power and agenda need examination as much as anyone else’s.
Chris and Stefan suggest ‘the commitment to evidence has opened up the space fundamentally to challenge conventional, technical approaches to aid.’ We would agree, but it would seem that the exception is when it comes to addressing the power of donors such as DFID, being honest about the domestic political pressures they are under, and assessing the possibility that their behaviour (including how evidence-based approaches are managerialised) may on occasion be undermining processes of development and social transformation. Is DFID drawing upon anthropologists or ethnographic researchers, as the police in the UK have recently done, to understand how its policies on, for example, results or value for money change behaviour in the agency and its relationships with others?
To imply that we are suggesting that ‘it is not worth trying to provide the best and most rigorous evidence to those who need to make difficult decisions’ is simply a wilful misstating of our position. On the contrary, we are arguing that there is more ‘evidence’ out there than some seem to admit, because their world view precludes seeing it as such. Where we in particular see the need for more evidence is in how the evidence-based and results agenda plays out in practice: how it affects the behaviour of development agencies and their staff, as well as their ability to support the kinds of transformational change that are likely to make a significant difference to the lives of people living in poverty and injustice. It is odd that those who argue for more evidence seem rather reluctant to admit that this is needed!
This is a debate we are keen to pursue further in the upcoming Big Push Forward conference on the Politics of Evidence.
This post first appeared on From Poverty to Power.