
From method to market: Some thoughts on the responses to "Tomayto tomahto"

By Humanity Journal

In this final post, Deval Desai and Rebecca Tapscott respond to comments by Lisa Denney and Pilar Domingo, Michael Woolcock, Morten Jerven, Alex de Waal, and Holly Porter.

Paktika Youth Shura

Our paper, Tomayto Tomahto, is in essence an exhortation and an ethical question. The exhortation: treat and unpack fragility research (for we limit our observations to research conducted for policy-making about fragile and conflict-affected places) as an institution of global governance, a set of complex social processes and knowledge practices that produce evidence as part of policy-making. The ethical question: all institutions contain struggles over the language and rules by which they allocate responsibility for their effects between individual actors (ethics) and structural factors (politics)—whether through law, democratic process, or religious dictate. In light of the trends of saturation and professionalization that we identify (and, as Jerven astutely points out in his response, a profound intensification of research), is it still sufficient to allocate responsibility for the effects of fragility research using the language and rules of method?

The five responses to our piece enthusiastically take up the exhortation. A range of positions is represented: the anthropologist (Porter), the applied development researchers (Denney and Domingo), the anthropologist/practitioner (de Waal), the practitioner/sociologist (Woolcock), and the economist (Jerven). They unpack the profoundly socio-political nature of the relationship between research and policy from a number of different perspectives: Porter's intimate view from the field, Jerven's sympathetic ear in the statistics office, Woolcock's and Denney and Domingo's feel for the alchemic moments when research turns into policy at the global level, and de Waal's distaste for the global laboratories in which those moments occur and his preference for the local re-embedding of research. All of these, of course, spatialize the research-policy nexus, just as we do; however, each then asks us to privilege one space over the others.

Avoiding perversions of evidence-informed decision-making

By Suvojit Chattopadhyay

Emanuel Migo giving a presentation in Garantung village, Palangkaraya, Central Kalimantan, Indonesia.

How to avoid: "We saw the evidence and made a decision…and that decision was: since the evidence didn't confirm our priors, to try to downplay the evidence."

Before we dig into that statement (based-on-a-true-story-involving-people-like-us), we start with a simpler, obvious one: many people are involved in evaluations. We use the word ‘involved’ rather broadly. Our central focus for this post is people who may block the honest presentation of evaluation results.

In any given evaluation of a program or policy, several groups of organizations and people have a stake. Most obviously, there are researchers and implementers. There are also participants. And, for much of the global development ecosystem, there are funders of the program, who may be separate from the funders of the evaluation. Both sets of funders may work through sub-contractors and consultants, bringing yet others on board.

Our contention is that not all of these actors are explicitly acknowledged in the current transparency movement in social science evaluation, with implications for the later acceptance and use of the results. The focus is often on a contract between researchers and evidence consumers as a sign that, in Ben Olken's terms, researchers are not nefarious and power-hungry (statistically speaking) (2015). To achieve its objectives, the transparency movement requires more than committing to a core set of analyses ex ante (through pre-analysis plans or commitments to analysis plans) and study registration.
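To make concrete what pre-analysis plans guard against, here is a toy simulation (our illustration, not from Olken's paper): when no analyses are specified ex ante and a researcher is free to test many outcomes, the chance of finding at least one "significant" result on pure noise far exceeds the nominal 5%.

```python
# A toy simulation (hypothetical setup): every outcome is pure noise, yet a
# researcher free to pick the best of 20 outcomes reports a "significant"
# effect in far more than 5% of studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_outcomes, n_studies = 200, 20, 1000

fishing_hits = 0
for _ in range(n_studies):
    treated = rng.integers(0, 2, n_subjects).astype(bool)
    outcomes = rng.normal(size=(n_subjects, n_outcomes))  # treatment does nothing
    pvals = [
        stats.ttest_ind(outcomes[treated, j], outcomes[~treated, j]).pvalue
        for j in range(n_outcomes)
    ]
    if min(pvals) < 0.05:  # report whichever outcome "worked"
        fishing_hits += 1

print(f"Share of null studies with a 'significant' result: {fishing_hits / n_studies:.0%}")
# Roughly 1 - 0.95**20, about 64%; a pre-analysis plan pins the test down ex ante.
```

In this stylized world, registration plus a pre-specified primary outcome collapses that 64% back toward the nominal 5%; the point of the post is that this alone still leaves other actors free to bury the result.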

To make sure that research is conducted openly at all phases, transparency must include engaging all stakeholders — perhaps particularly those who can block the honest sharing of results. This is in line with, for example, EGAP's third research principle on rights to review and publish results. We return to some ideas for how to encourage this at the end of this post.

Beyond the quest for "policy implications": Alternative options for applied development researchers

By Humanity Journal

This post, written by Michael Woolcock, is a contribution to an online symposium on the changing nature of knowledge production in fragile states. Be sure to read other entries by Deval Desai and Rebecca Tapscott, and by Lisa Denney and Pilar Domingo.

Indonesia fills out form on rice

My nomination for development's 'Most Insightful, Least Cited' paper is Ariel Heryanto's "The development of 'development.'"[1] Originally written in Indonesian in the mid-1980s, Heryanto's gem has been cited a mere 79 times (according to Google Scholar), even in its carefully-translated English incarnation. For me, this paper is so wonderful because it makes, in clear and clever ways, two key points that bear endless repetition, especially to today's junior scholars. The first point is that inference from evidence is never self-evident: significance must always be interpreted through theory. Consider the seemingly obvious fact that the sun rises in the east every morning, he writes. What could be more universally and unambiguously true? The problem, of course, is that the sun does not rise in the east; instead, despite every piece of sensory evidence to the contrary, the earth rotates counterclockwise on its axis and revolves around a stationary sun, making it appear as if the sun rises in the east. But we only know this – or, more accurately, claim to know this – because today we happen to have a theory, itself based on more complex forms of observation and theory, that helps us interpret the prevailing evidence, reconcile it with evidence from analyses of other cosmic phenomena, and thus draw broadly coherent conclusions and inferences.

Heryanto's second key point is that we are all captives of language, of the limits of any given tongue to convey the subtleties of complex issues. From this premise he proceeds to unpack the clumsy, alluring yet powerful word that in English we call 'development', noting that in Indonesian there are at least two very different interpretations of its meaning, and with this, two very different words – perkembangan and pembangunan – connoting two very different teleologies and policy agendas: the former a natural, 'organic' process akin to flowers blooming ("software"); the latter an overt, intentional and 'constructed' political project of nation building ("hardware"). When translated into English, however, both perkembangan and pembangunan are typically rendered simply as "development," thereby collapsing into a singular popular conception what in Indonesian discourse is a distinctly pluralist one. In the opening week of my class at the Kennedy School, which typically has 50 students who between them speak around 30 languages, we begin with a lively discussion of what "development" means in Arabic, Hindi, French, Turkish, Spanish, Swahili, Swedish… It turns out to mean all sorts of things.[2]

I open this way because I think the next article we need in this “genre” – though hopefully one that quickly transcends it because it is both highly insightful and highly cited! – is something akin to what Desai and Tapscott have begun with their ‘Tomayto Tomahto’ paper. In short, echoing Heryanto, we need more development research on development research. Such scholarship, however, would go beyond providing a mere chronology of changing professional styles, methodological emphases and funding characteristics (scale, sources, time horizons, expectations) to explanations of how and why such changes have occurred. Such explanations would be grounded in analyses of the shifting historical experiences and geo-political imperatives different generations of researchers have sought to accommodate, the particular ideas these experiences and imperatives rendered normative, and the concomitant gains and losses these changes have entailed for those finding themselves managing the “trade-offs” (such as they are) between scholarly independence and public utility.

Turning the gaze on ourselves: Acknowledging the political economy of development research

By Humanity Journal

This post by Lisa Denney and Pilar Domingo is a contribution to an online symposium from Humanity Journal on the changing nature of knowledge production in fragile states. Be sure to read other entries, beginning with Deval Desai and Rebecca Tapscott's piece.

IBM Research - Africa Scientists at Riara School, Nairobi

While researchers (ourselves included) now consistently underline the importance of understanding the political economy of developing countries, and of the donors that support them, in order to achieve better aid outcomes, the research industry remains largely ambivalent about questions of our own political economy. Desai and Tapscott's paper is therefore a refreshing attempt to start unpacking this, and the ways in which 'evidence' is produced within the development industry.

Here, we offer reflections on three stages of this process: building evidence, translating evidence and dislodging evidence. But a word of caution is also merited upfront. The fact that we are talking about "evidence," rather than research, is itself telling and underscores a shift in the development industry in the last ten years. Speaking about "evidence" rather than about "research" suggests something much more concrete and indisputable. Evidence is taken as proof. But surely research is also debate. While there are, of course, things for which largely indisputable evidence can be found (the effects of vaccines on disease, for instance), the use of this terminology, particularly in the social sciences where little is concrete or universal, suggests that final answers are discoverable. It can thus be used to close down debate as much as to encourage it. Research, on the other hand, recognizes that most findings are contributions to knowledge that helpfully move us towards deeper understanding and greater awareness but do not claim to be the final word on a given topic.
 

Tomayto tomahto: The research supply chain and the ethics of knowledge production

By Humanity Journal

Pre-test of Rural Household Survey, Pakistan

This post is the first in a symposium from Humanity Journal on the changing nature of knowledge production in fragile states. It was written by Deval Desai, a Research Associate at ODI, and Rebecca Tapscott, a PhD Candidate at the Fletcher School at Tufts University.

Aid in the 21st century is increasingly evidence-driven. Between 2000 and 2006, the World Bank spent a total of $630 million on research. By 2011 the World Bank was spending $606 million per year, or about a quarter of its country budgets. In September of this year, by signing up to the Sustainable Development Goals, the global community enshrined a commitment to “increase significantly” a range of high-quality data over the next 15 years, to facilitate qualitative as well as quantitative understandings of growth and progress.

As the international community seeks to tackle the “hard problems” of development—fragility, conflict, endemic poverty—qualitative research is ever-more important. These problems are not amenable to best-practice solutions but must be tackled through deep contextual understanding of their drivers. Or so the policy story goes.[1] As a result, conducting qualitative research today is different from the days when Geertz set out for Bali. Gone are the intrepid individuals setting off to explore and explain an untouched environment, unaware of the demands of policymakers.[2]

We argue that while practice has changed, the ideology of qualitative research has not. Qualitative research is generally understood as the individual exercise of research methods to produce knowledge about the world, knowledge that can then be taken up by governance actors of all stripes. By contrast, we believe that today we must understand research as a systemic intervention, within the broader context of globalization and international development. Therefore, we should start with the political economy of contemporary research—an iterative, professionalized and increasingly saturated practice—to rethink the political and ethical implications of the research that we do.

As a first step to this end, we contrast two stylized frameworks for understanding qualitative research in fragile contexts: the "fragility research" framework, which we argue dominates the current debate; and the "research supply chain" framework, which we offer as a new framework and a provocation to discussion. We discuss each in turn, first considering how fragility research frames knowledge production in fragile or conflict-affected states, identifying some assumptions the fragility research framework rests on, and critiquing some of its key conclusions. We then discuss the research supply chain as an alternative framework to explore the relationship between knowledge generation and policy. Finally, we raise some questions based on the new framework's implications.

Rethinking research: Systemic approaches to the ethics and politics of knowledge production in fragile states

By Humanity Journal

Classroom in Mali

Recently, Humanity, a peer-reviewed academic journal from the University of Pennsylvania, has been hosting an online symposium on the changing nature of knowledge production in fragile states. In light of the intensification of evidence-based policymaking and the "data revolution" in development, the symposium asked what the ethical and political implications are for qualitative research as a tool of governance.

We are presenting their articles in the coming days to share the authors' thoughts with the People, Spaces, Deliberation community and generate further discussion.

The symposium will begin tomorrow with a short paper from Deval Desai and Rebecca Tapscott, followed by responses during the coming weeks from Lisa Denney and Pilar Domingo (ODI); Michael Woolcock (World Bank); Morten Jerven (Norwegian University of Life Sciences and Simon Fraser University); Alex de Waal (World Peace Foundation); and Holly Porter (LSE). We hope that you enjoy the symposium and participate in the debate!

Humanitarian broadcasting in emergencies

By Theo Hannides

A recording of BBC Media Action's 'Milijuli Nepali' (Together Nepal)

It is several days after the earthquake in Nepal. A small group of Nepali women sit on the side of the road in a village in Dhading district, 26 kilometres from Kathmandu. In this village, many people lost their homes and several died in the earthquake.

The women are listening attentively to a radio programme, Milijuli Nepali, meaning 'Together Nepal'. After it finishes, one of the women starts asking the others questions: What did they think of the programme? Did they learn anything? What else would they like to hear to help them cope in the aftermath of the earthquake? The women start discussing some of the issues raised around shelter and hygiene. They like the creative suggestions, particularly as they come from a source they like and trust - the BBC. They give the researcher their ideas for future programmes and she writes them down.


Hawthorne effects: Past and future

By Heather Lanthorn

Maseru Shining Centuary Textiles

I have two main points in this blog. The first is a public service announcement in the guise of history. Not so long ago, I heard someone credit the Hawthorne effect to an elusive, eponymous Dr. Hawthorne; in this case, there is no Dr. Hawthorne directly tied to these studies. The second is a call to expand our conception of Hawthorne effects – or really, observer or evaluator effects – in the practice of social science monitoring and evaluation.
 
Hawthorne history

The Hawthorne effect earned its name from the factory in which the study was sited: the Western Electric Company's Hawthorne plant, near Chicago. These mid-1920s studies, carried out by researchers from MIT, Harvard, and the US National Research Council, were predicated on in-vogue ideas related to scientific management. Specifically, the researchers examined the effect of artificial illumination on worker productivity, raising and lowering the artificial light available to the women assembling electric relays (winding coils of wire) in a factory until it was equivalent to moonlight.
 
The finding that made social science history (first in the nascent fields of industrial and organizational psychology and slowly trickling out from there) was that worker productivity increased when the amount of light was changed, and productivity decreased when the study ended. It was then suggested that the workers’ productivity increased because of the attention paid to them via the study, not because the light was altered.

Thus, the “Hawthorne effect” was named and acknowledged: the change in an outcome that can be attributed to behavioral responses among subjects/respondents/beneficiaries simply by virtue of being observed as part of an experiment or evaluation.
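To see how such observer effects can masquerade as impact, consider a minimal simulation (hypothetical numbers, not drawn from the Hawthorne studies): the program itself does nothing, but treated units are monitored more intensively, and being observed raises the outcome.

```python
# A minimal sketch (hypothetical numbers): the intervention has zero true
# effect, but treated units are observed more often, and observation itself
# raises the outcome -- so the naive contrast shows a spurious "impact".
import numpy as np

rng = np.random.default_rng(0)
n = 5000
treated = rng.integers(0, 2, n).astype(bool)

true_effect = 0.0       # the program changes nothing
hawthorne_effect = 0.5  # being observed raises the outcome
# Assumed monitoring intensities: 90% of treated vs 30% of comparison units
# experience an observation (survey visit, enumerator presence, etc.).
observed = rng.random(n) < np.where(treated, 0.9, 0.3)

outcome = rng.normal(size=n) + true_effect * treated + hawthorne_effect * observed

naive = outcome[treated].mean() - outcome[~treated].mean()
print(f"Naive treatment estimate: {naive:.2f} (true effect: {true_effect})")
# Expected bias: 0.5 * (0.9 - 0.3) = 0.3 -- attention, not the program.
```

When observation intensity is equal across arms, the bias cancels in the difference, but measured levels still overstate what unobserved behaviour looks like.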

What do we know about the long-term legacy of aid programmes? Very little, so why not go and find out?

By Duncan Green

We talk a lot in the aid biz about wanting to achieve long-term impact, but most of the time, aid organizations work in a time bubble set by the duration of a project. We seldom go back a decade later and see what happened after we left. Why not?

Orphaned and homeless children being given a non-formal education at a school in India

Everyone has their favourite story of the project that turned into a spectacular social movement (SEWA) or produced a technological innovation (M-PESA) or spun off a flourishing new organization (New Internationalist, Fairtrade Foundation), but this is all cherry-picking. What about something more rigorous: how would you design a piece of research to look at the long-term impacts across all of our work? Some initial thoughts, but I would welcome your suggestions:

One option would be to do something like our own Effectiveness Reviews, but backdated: take a random sample of 20 projects from our portfolio in, say, 2005, and then design the most rigorous possible research to assess their impact.

There will be some serious methodological challenges to doing that, of course. The further back in time you go, the more confounding events and players will have appeared in the interim, diluting attribution like water running into sand. If farming practices are more productive in this village than in a neighbouring one, who's to say it was down to that particular project you did a decade ago? And anyway, if practices have been successful, other communities will probably have noticed – how do you allow for positive spillovers and ripple effects? And those ripple effects could have spread much wider – to government policy, or changes in attitudes and beliefs.
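One way to see how spillovers eat the measured effect: a toy simulation (hypothetical numbers, not Oxfam data) in which the project genuinely works, but comparison villages gradually copy it over a decade.

```python
# A toy simulation (hypothetical numbers): the project raises productivity by
# 1.0, but each year 10% of the remaining comparison villages copy the
# practice, so a treatment-vs-comparison contrast a decade later shrinks.
import numpy as np

rng = np.random.default_rng(1)
n_villages = 200
project = np.arange(n_villages) < 100  # first 100 villages got the project

true_impact = 1.0
adopted = project.copy()  # year 0: only project villages use the practice

for year in range(10):
    # 10% of not-yet-adopting comparison villages pick it up from neighbours.
    spill = ~adopted & ~project & (rng.random(n_villages) < 0.10)
    adopted = adopted | spill

productivity = rng.normal(size=n_villages) + true_impact * adopted
measured = productivity[project].mean() - productivity[~project].mean()
print(f"Measured effect after 10 years: {measured:.2f} (true impact: {true_impact}; "
      f"{adopted[~project].mean():.0%} of comparison villages adopted)")
# Expected: 1.0 * 0.9**10, about 0.35 -- attribution diluted, not absent.
```

In this sketch the naive contrast understates the true impact by roughly two-thirds, which is a success story misread as a weak one.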
 

Research questions about technology use in education in developing countries

By Michael Trucano
let's investigate this systematically ...

Back in 2005, I helped put together a 'quick guide to ICT and education challenges and research questions' in developing countries. This list was meant to inform a research program sponsored at the time by the World Bank's infoDev program, but I figured I'd make it public, because the barriers to publishing were so low (copy -> paste -> save -> upload) and because doing so might be useful to someone else.

While I don't know to what extent others have actually found this list helpful, I have seen this document referenced over the years in various funding proposals, and by other funding agencies. Over the past week I've (rather surprisingly) heard two separate organizations reference this rather old document while considering their research priorities going forward, related to investigating possible uses of information and communication technologies (ICTs) to help meet educational goals in low- and middle-income countries around the world. This got me wondering how these 50 research questions had held up over the years.

Are they still relevant?

And:

What did we miss, ignore or not understand?

The list of research questions to be investigated going forward was a sort of companion document to Knowledge maps: What we know (and what we don't) about ICT use in education in developing countries. It was in many ways a creature of its time and context. The formulation of the research questions was in part influenced by some stated interests of the European Commission (which was co-funding some of the work), and I knew that some research questions would resonate with other potential funders at the time (including the World Bank itself) who were interested in related areas (see, for example, the first and last research questions). The list of research questions was thus somewhat idiosyncratic, did not presume to be comprehensive in its treatment of the topic, and was not meant to imply that certain areas of research interest were 'more important' than others not included on the list.

That said, in general the list seems to have held up quite well, and many of the research questions from 2005 continue to resonate in 2015. In some ways, this resonance is unfortunate, as it suggests that we still don't know the answers to a lot of very basic questions. Indeed, in some cases we may know as little in 2015 as we knew in 2005, despite the explosion of activity and investment (and rhetoric) in exploring the relevance of technology use in education to help meet a wide variety of challenges faced by education systems, communities, teachers and learners around the world. This is not to imply that we haven't learned anything, of course (an upcoming EduTech blog post will look at two very useful surveys of research findings that have been published in the past year), but that we still have a long way to go.
 

Some comments and observations, with the benefit of hindsight and when looking forward

The full list of research questions from 2005 is copied at the bottom of this blog post (here's the original list as published, with explanation and commentary on individual items).

Reviewing this list, a few things jump out at me:

