When does evidence influence policy? This has become something of an existential question among technical experts and researchers, accompanied by frequent hand-wringing about the lack of traction their research gets with policy-makers. Governments, for their part, say: “Who says we don’t use evidence? We use the right kind of evidence!”
And indeed, many do! Take the case of Mexico. Its famed social program, Oportunidades, which among other things gives cash incentives to eligible households to send their children to school and look after their health, has evolved over time based on hard evidence. Mexico has also built a policy culture of using evidence.
What do I know from being both on the technical side and the policy side?
Last year, in January, the Government of Himachal Pradesh released a report – Scaling the Heights: Social Inclusion and Sustainable Development in Himachal Pradesh. It said, among other things, that while Himachal Pradesh had done remarkably well on a number of poverty and social outcomes, its falling child sex ratio, the nutrition outcomes of its children, and a fast-aging population were likely to become policy challenges. Budget allocations based on the recommendations followed promptly.
Scaling the Heights allowed me to reflect on the question of what works.
- Motivation: The determination at the highest levels to effect broad reforms and social change, even though reforms are often political, is probably the first step. This may well be a truism, but the case of regressive fuel subsidies is instructive: there is a huge body of evidence showing that many of these subsidies benefit the non-poor, yet only a few countries have taken steps to reform them. State entities engaging in a reform process also need to be confident enough to be benchmarked against comparators. In short, they should want to make a change, not just look good.
- Data: What matters is not just the existence of good data, but policy-makers’ acceptance that the data are robust and representative. In the case of Scaling the Heights, we used national data, so there was no quibbling about data quality or reliability. Results of new qualitative fieldwork were quickly discussed with government at all levels.
- Dialogue and trust: I often feel that researchers inflate their own objectivity and take a superior stance vis-à-vis governments. Working behind closed doors and “saving the best for last” doesn’t really work; it generates mistrust and opacity. Ideally, analysis should be done together with policy-makers, or at least in frequent contact with them. Suffice it to say that trust between analysts and policy-makers is the cornerstone of successful policy analysis. In this scenario, neither side claims a monopoly on the “truth”.
- Trade-offs: Some would argue that doing analysis with policy-makers can muddy its robustness. There may well be a trade-off between the best analysis and the most useful one, but if the aim is to influence policy, it is a trade-off worth making. If the aim is simply to publish, the discussion is moot. In fact, it may often not be a trade-off at all, especially when the analysis is trying to untangle complex socio-economic and institutional issues.
- Communication: Results that are easily communicated to a diverse technical and non-technical audience gain greater acceptance and foster broad public dialogue. Today, both face-to-face communication and online conversations can have a huge impact on the policy process. In fact, the influencing agents may not be the researchers at all, but a section of the public that uses the policy analysis to press its policy-makers.
Scaling the Heights was intended as a “Poverty and Social Impact Analysis” (PSIA) – an approach that the World Bank and others use to forecast the likely impact of policy reform. While social forecasting is more of an art than a science, a good diagnostic goes a long way in helping the policy process.