Evidence-based approaches

Avoiding perversions of evidence-informed decision-making

By Suvojit Chattopadhyay

Emanuel Migo giving a presentation in Garantung village, Palangkaraya, Central Kalimantan, Indonesia.

How to avoid: “We saw the evidence and made a decision…and that decision was: since the evidence didn’t confirm our priors, to try to downplay the evidence.”

Before we dig into that statement (based-on-a-true-story-involving-people-like-us), we start with a simpler, obvious one: many people are involved in evaluations. We use the word ‘involved’ rather broadly. Our central focus for this post is people who may block the honest presentation of evaluation results.

In any given evaluation, several groups of organizations and people have a stake in the assessment of a program or policy. Most obviously, there are researchers and implementers. There are also participants. And, for much of the global development ecosystem, there are funders of the program, who may be separate from the funders of the evaluation. Both of these may work through sub-contractors and consultants, bringing yet others on board.

Our contention is that not all of these actors are explicitly acknowledged in the current transparency movement in social science evaluation, with implications for the later acceptance and use of the results. The current focus is often on a contract between researchers and evidence consumers as a sign that, in Ben Olken’s terms, researchers are not nefarious and power-hungry (statistically speaking) (2015). To achieve its objectives, the transparency movement requires more than committing to a core set of analyses ex ante (through pre-analysis plans or commitments to analysis plans) and study registration.

To make sure that research is conducted openly at all phases, transparency must include engaging all stakeholders — perhaps particularly those that can block the honest sharing of results. This is in line with, for example, EGAP’s third research principle on rights to review and publish results. We return to some ideas of how to encourage this at the end of the blog.

Beyond the quest for "policy implications": Alternative options for applied development researchers

By Humanity Journal

This post, written by Michael Woolcock, is a contribution to an online symposium on the changing nature of knowledge production in fragile states. Be sure to read other entries by Deval Desai and Rebecca Tapscott and Lisa Denney and Pilar Domingo.

Indonesia fills out form on rice

My nomination for development’s ‘Most Insightful, Least Cited’ paper is Ariel Heryanto’s “The development of ‘development’”[1] Originally written in Indonesian in the mid-1980s, Heryanto’s gem has been cited a mere 79 times (according to Google Scholar), even in its carefully translated English incarnation. For me, this paper is so wonderful because it makes, in clear and clever ways, two key points that bear endless repetition, especially to today’s junior scholars. The first point is that inference from evidence is never self-evident: significance must always be interpreted through theory. Consider the seemingly obvious fact that the sun rises in the east every morning, he writes. What could be more universally and unambiguously true? The problem, of course, is that the sun does not rise in the east; instead, despite every piece of sensory evidence to the contrary, the earth rotates counterclockwise on its axis and revolves around a stationary sun, making it appear as if the sun rises in the east. But we only know this – or, more accurately, claim to know this – because today we happen to have a theory, itself based on more complex forms of observation and theory, that helps us interpret the prevailing evidence, reconcile it with evidence from analyses of other cosmic phenomena, and thus draw broadly coherent conclusions and inferences.

Heryanto’s second key point is that we are all captives of language, of the limits of any given tongue to convey the subtleties of complex issues. From this premise he proceeds to unpack the clumsy, alluring yet powerful word that in English we call ‘development’, noting that in Indonesian there are at least two very different interpretations of its meaning, and with this, two very different words – perkembangan and pembangunan – connoting two very different teleologies and policy agendas: the former a natural, ‘organic’ process akin to flowers blooming (“software”); the latter an overt, intentional and ‘constructed’ political project of nation building (“hardware”). When translated into English, however, both perkembangan and pembangunan are typically rendered simply as “development,” thereby collapsing into a singular popular conception what in Indonesian discourse is a distinctly pluralist one. In the opening week of my class at the Kennedy School, which typically has 50 students who between them speak around 30 languages, we begin with a lively discussion of what “development” means in Arabic, Hindi, French, Turkish, Spanish, Swahili, Swedish… It turns out to mean all sorts of things.[2]

I open this way because I think the next article we need in this “genre” – though hopefully one that quickly transcends it because it is both highly insightful and highly cited! – is something akin to what Desai and Tapscott have begun with their ‘Tomayto Tomahto’ paper. In short, echoing Heryanto, we need more development research on development research. Such scholarship, however, would go beyond providing a mere chronology of changing professional styles, methodological emphases and funding characteristics (scale, sources, time horizons, expectations) to explanations of how and why such changes have occurred. Such explanations would be grounded in analyses of the shifting historical experiences and geo-political imperatives different generations of researchers have sought to accommodate, the particular ideas these experiences and imperatives rendered normative, and the concomitant gains and losses these changes have entailed for those finding themselves managing the “trade-offs” (such as they are) between scholarly independence and public utility.

Turning the gaze on ourselves: Acknowledging the political economy of development research

By Humanity Journal

This post by Lisa Denney and Pilar Domingo is a contribution to an online symposium from Humanity Journal on the changing nature of knowledge production in fragile states. Be sure to read other entries, beginning with Deval Desai and Rebecca Tapscott's piece.

IBM Research - Africa Scientists at Riara School, Nairobi

While researchers (ourselves included) now consistently underline the importance of understanding the political economy of developing countries, and of the donors that support them, in order to achieve better aid outcomes, the research industry remains largely ambivalent about questions of our own political economy. Desai and Tapscott’s paper is therefore a refreshing attempt to start unpacking this, and the ways in which ‘evidence’ is produced within the development industry.

Here, we offer reflections on three stages of this process: building evidence, translating evidence and dislodging evidence. But a word of caution is merited upfront. The fact that we are talking about ‘evidence’ rather than ‘research’ is itself telling, and underscores a shift in the development industry over the last ten years. Speaking about ‘evidence’ rather than ‘research’ suggests something much more concrete and indisputable: evidence is taken as proof. But surely research is also debate. While there are, of course, things for which largely indisputable evidence can be found (the effects of vaccines on disease, for instance), the use of this terminology, particularly in the social sciences where little is concrete or universal, suggests that final answers are discoverable. It can thus be used to close down debate as much as to encourage it. Research, on the other hand, recognizes that most findings are contributions to knowledge that helpfully move us towards deeper understanding and greater awareness but do not claim to be the final word on a given topic.
 

Rethinking research: Systemic approaches to the ethics and politics of knowledge production in fragile states

By Humanity Journal

Classroom in Mali

Recently, Humanity, a peer-reviewed academic journal from the University of Pennsylvania, has been hosting an online symposium on the changing nature of knowledge production in fragile states. In light of the intensification of evidence-based policymaking and the “data revolution” in development, the symposium asked what the ethical and political implications are for qualitative research as a tool of governance.

We are presenting their articles in the coming days to share the authors' thoughts with the People, Spaces, Deliberation community and generate further discussion.

The symposium will begin tomorrow with a short paper from Deval Desai and Rebecca Tapscott, followed by responses during the coming weeks from Lisa Denney and Pilar Domingo (ODI); Michael Woolcock (World Bank); Morten Jerven (Norwegian University of Life Sciences and Simon Fraser University); Alex de Waal (World Peace Foundation); and Holly Porter (LSE). We hope that you enjoy the symposium and participate in the debate!

Humanitarian broadcasting in emergencies

By Theo Hannides

A recording of BBC Media Action’s ‘Milijuli Nepali’ (Together Nepal)

It is several days after the earthquake in Nepal. A small group of Nepali women sit on the side of the road in a village in Dhading district, 26 kilometres from Kathmandu. In this village, many people lost their homes and several died in the earthquake.

The women are listening attentively to a radio programme, Milijuli Nepali, meaning ‘Together Nepal’. After it finishes, one of the women starts asking the others questions: What did they think of the programme? Did they learn anything? What else would they like to hear to help them cope in the aftermath of the earthquake? The women discuss some of the issues raised around shelter and hygiene, and they like the creative suggestions, particularly as they come from a source they like and trust: the BBC. They give the researcher their ideas for future programmes and she writes them down.

BBC Media Action’s ‘Milijuli Nepali’ (Together Nepal)

Thinking about stakeholder risk and accountability in pilot experiments

By Heather Lanthorn

ACT malaria medication

Heather Lanthorn describes the design of the Affordable Medicines Facility – malaria, a financing mechanism for expanding access to antimalarial medication, as well as some of the questions countries faced as they decided to participate in its pilot, particularly those related to risk and reputation.

In my never-ending thesis, I examine the political economy of adopting and implementing a large global health program, the Affordable Medicines Facility – malaria, or the “AMFm”. This program was designed at the global level, meaning largely in Washington, DC and Geneva, with tweaking workshops in assorted African capitals. Global actors invited select sub-Saharan African countries to apply to pilot the AMFm for two years before any decision would be made to continue, modify, scale up, or terminate the program. One key point I make is that implementing stakeholders see pilot experiments with uncertain follow-up plans as risky: they take time and effort to set up, and they often have unclear lines of accountability, presenting risk to personal, organizational, and even national reputations. This can lead to stakeholder resistance to being involved in experimental pilots.

It should be noted from the outset that it was not fully clear what role the evidence from the pilot would play in the board’s decision or how the evidence would be interpreted. As I highlight below, this lack of clarity fostered feelings of risk, as well as resistance among some national-level stakeholders to participating in the pilot. Several critics have noted that the scale and scope, and the requisite new systems and relationships, involved in the AMFm disqualify it from being considered a ‘pilot,’ though I use that term for continuity with most other AMFm-related writing.
 
My research focuses on the national and sub-national processes of deciding to participate in the initial pilot (‘phase I’) stage, looking specifically at Ghana. Besides the project’s scale and the resources mobilized, one thing that stood out about this project is that there was considerable resistance to piloting the program among stakeholders in several of the invited countries. I have been lucky and grateful that a set of key informants in Ghana, as well as my committee and other reviewers, have been willing to converse openly with me over several years as I have tried to untangle the reasons behind the support and resistance and to get the story ‘right’.

Social Marketing Master Class: The Importance of Evaluation

By Roxanne Bauer

Social marketing emerged from the realization that marketing principles can be used not just to sell products but also to “sell” ideas, attitudes and behaviors. The purpose of any social marketing program, therefore, is to change the attitudes or behaviors of a target population, for the greater social good.

In evaluating social marketing programs, the true test of effectiveness is not the number of flyers distributed or public service announcements aired but how the program impacted the lives of people.

Rebecca Firestone, a social epidemiologist at PSI with area specialties in sexual and reproductive health and non-communicable diseases, walks us through some best practices of social marketing and offers suggestions for improvement in the future. Chief among her suggestions is the need for more and better evaluation of social marketing programs.



Theories of Change, Stakeholders, Imagined Beneficiaries, & Stealing from Product Design. That is, Meet ‘Mary.’

By Heather Lanthorn

I have been thinking a lot about ‘theories of change’ this week (as I was here).  Actually, I have been thinking more about ‘conceptual models,’ which was the term by which I was first introduced to the general idea* and the term I still prefer because it implies more uncertainty and greater scope for tinkering than does ‘theory.’ (I accept that ‘theory of change’ has been branded and that I have to live with it, but I don’t have to like it.)

Regardless of the term, thinking seriously about how behavioral, social and economic change happens is important but often overlooked during the planning stages of projects, programs, policies and linked evaluations. Moreover, these models are glossed over in the analysis and reporting stages, left to academic speculation in the discussion section of an evaluation paper rather than informed by talking systematically to the people the program was intended to benefit.

I think there is growing recognition that building a theory of change is something that should happen, at least in part, backwards (among other places, this is discussed in ‘evidence-based policy’ with the ideas of a ‘pre-mortem’ and of ‘thinking step-by-step and thinking backwards’). That is, you start with the end goal, usually some variant of ‘peace,’ ‘satisfaction,’ ‘wellbeing,’ ‘capabilities,’** etc., in mind and work backwards as to how you are going to get there from here.

‘Relevant Reasons’ in Decision-Making (3 of 3)

By Heather Lanthorn

This is the third in our series of posts on evidence and decision-making, also posted on Heather’s blog. Here are Part 1 and Part 2.
***
In our last post, we wrote about the factors – evidence and otherwise – that influence decision-making about development programmes. To do so, we considered the premise of an agency deciding whether to continue or scale a given programme after piloting it, with an accompanying evaluation commissioned explicitly to inform that decision. This is a potential ‘ideal case’ of evidence-informed decision-making. Yet, in practice, the role of evidence in informing decisions is often unclear.

What is clear is that transparent parameters for making decisions about how to allocate resources following a pilot may improve the legitimacy of those decisions. We have started, and continue in this post, to explore whether decision-making deliberations can be shaped ex ante so that, regardless of the outcome, stakeholders feel it was arrived at fairly. Such pre-commitment to the process of deliberation could carve out a specific role for evidence in decision-making. Clarifying the role of evidence would inform what types of questions decision-makers need answered and with what kinds of data, as we discussed here.

Have Evidence, Will… Um, Erm (2 of 2)

By Heather Lanthorn

This is the second in a series of posts with Suvojit, initially planned as a series of two but growing to six…

Reminder: The Scenario
In our last post, we set up a scenario that we* have both seen several times: a donor or large implementing agency (our focus, though we think our arguments apply to governmental ministries) commissions an evaluation, with explicit (or implicit) commitments to ‘use’ the evidence generated to drive their own decisions about continuing/scaling/modifying/scrapping a policy/program/project.

And yet, the role of evidence in decision-making of this kind is unclear.

In response, we argued for something akin to Patton’s utilisation-focused evaluation. Such an approach assesses the “quality” or “rigor” of evidence by how well it addresses the questions and purposes that matter for decision-making, using the most appropriate tools and timing to facilitate decision-making in a particular political-economic moment, including the capacity of decision-makers to act on the evidence.
