What is the Evidence for Evidence-Based Policy Making? Pretty Thin, Actually

A recent conference in Nigeria considered the evidence that evidence-based policy-making actually, you know, exists. The conference report sets out its theory of change in a handy diagram – the major conference sessions are indicated in boxes.

Conclusion?

‘There is a shortage of evidence on policy makers’ actual capacity to use research evidence and there is even less evidence on effective strategies to build policy makers’ capacity. Furthermore, many presentations highlighted the insidious effect of corruption on use of evidence in policy making processes.’

i.e. you can have all the arguments you like on the nature of evidence – disciplinary and political bias, what constitutes knowledge, etc. (as this blog recently did) – but policy makers are often either unable or unwilling to use it anyway. Supply doesn’t guarantee demand.

The aid agencies and research councils that fund research are very keen to promote this shift from worrying about supply to wondering how to boost demand (although the researchers are often less keen – they just want to be left alone to churn out papers and develop their careers). What was nice about this conference was the amount of on-the-ground grassroots research on how decision makers actually use (or more often ignore) research in places like Nigeria (‘political manipulation and ambition seem to be among the strongest determinants of factors influencing policy development processes’) and Indonesia (‘Even if technocratic or political – it doesn’t matter – it’s 90% personality’).

One thing I learned is that agonising over per diems is not confined to the aid business:

‘One particularly heated debate concerned the frequent requests from policy makers for ‘sitting fees’ in order to attend training or seminars which could inform them about research issues. Participants agreed that this practice is widespread in most of the African countries represented; however, opinions on how to respond to this differed. Some suggested that those who aim to inform policy makers about research need to just accept that paying these fees is necessary and should therefore include them in their budgets. However others felt that continuing to pay such fees just propagates the problem and that those funding research communication and uptake work should take a ‘zero-tolerance’ approach.’

On the demand side, the report considers both capacity and incentives. On capacity, the line ‘most people don’t know what they don’t know!’ will resonate with researchers in NGOs trying to convince their colleagues to look harder at the evidence. There’s a mountain to climb: a survey of Zambian parliamentary researchers and librarians (people who had positively agreed that they needed to use research) found that ‘only one in three believed there was consensus that the CIA did not invent HIV’.

‘Research-evidence is often used opportunistically to back up pre-existing political decisions/opinions (confirmation bias)’. That preference for policy-based evidence-making is alive and well in the big aid donors and NGOs too, of course…

And unfortunately, research from Ghana, Sierra Leone, Uganda, and Zambia concluded that ‘a lack of capacity to understand research was perceived as beneficial to policy makers since it ‘allowed’ them to ignore evidence and instead follow their own agenda. Thus, there is not only a lack of capacity but also a disincentive to build capacity.’ Oh dear.

How to build the incentives to use research, assuming these political obstacles are not insuperable? On HIV policy in Pakistan, DFID ‘built the capacity of civil society organisations representing marginalised groups to demand policy change’.

Other useful tips:

  • including policy makers in the design phase of research projects (get them on the advisory board, guys, don’t just see them as seminar fodder once you’ve finished the research)
  • networks and linkages between researchers and policy makers are necessary but definitely not sufficient
  • researchers need to change the (often dire) way they communicate their work – in one case study from Ghana ‘photographs of real people suffering from mental illness is far more powerful in influencing opinions than any policy brief could be.’ (Well duh)
  • target the ‘policy entrepreneurs’ with influence over decision-makers (the Minister’s old university professor etc)
  • ‘There is a tendency for researchers and research intermediaries to focus their communication efforts on elected representatives and appointed officials but to ignore the crucial role that technocratic staff play.’

All good stuff, but the report reminded me of the governance debates of a few years ago: even though it recognized that the problem is one of incentives and politics, it kept drifting back to the comfort zone of supply issues (if they don’t want research, we just have to get better at communicating or building their capacity), rather than thinking harder about the demand side. For example:

  • Anyone involved in advocacy knows that the openness of policy makers to new ideas is episodic, and linked to things like changes of administration, scandals, crises and failures. So how does research need to be redesigned to capitalise on such brief windows of opportunity?
  • Opposition parties are often much less well resourced, and much more malleable in their thinking as they cast around for clever ideas that will help them win power – to what extent should researchers concentrate on those without power, rather than those currently in office?
  • Young minds are (generally) more open to new ideas than old ones: should researchers target future leaders (who are pretty easy to identify by faculty and university) rather than waste their time on the current generation?

The evidence debate, you won’t be surprised to hear, continues…

This post first appeared on From Poverty to Power
