
Measurement

From method to market: Some thoughts on the responses to "Tomayto tomahto"

Humanity Journal

In this final post, Deval Desai and Rebecca Tapscott respond to comments by Lisa Denney and Pilar Domingo, Michael Woolcock, Morten Jerven, Alex de Waal, and Holly Porter.

Our paper, Tomayto Tomahto, is in essence an exhortation and an ethical question. The exhortation: treat and unpack fragility research (for we limit our observations to research conducted for policy-making about fragile and conflict-affected places) as an institution of global governance, a set of complex social processes and knowledge practices that produce evidence as part of policy-making. The ethical question: all institutions contain struggles over the language and rules by which they allocate responsibility between individual actors (ethics) and structural factors (politics) for their effects: this might be law, democratic process, or religious dictate. In light of the trends of saturation and professionalization that we identify (and, as Jerven astutely points out in his response, a profound intensification of research), is it still sufficient to allocate responsibility for the effects of fragility research using the language and rules of method?

The five responses to our piece enthusiastically take up the exhortation. A range of positions is represented: the anthropologist (Porter), the applied development researchers (Denney and Domingo), the anthropologist/practitioner (de Waal), the practitioner/sociologist (Woolcock), and the economist (Jerven). They unpack the profoundly socio-political nature of the relationship between research and policy from a number of different perspectives: Porter’s intimate view from the field, Jerven’s sympathetic ear in the statistics office, Woolcock’s and Denney and Domingo’s feel for the alchemic moments when research turns into policy at the global level, and de Waal’s distaste for the global laboratories in which those moments occur and his preference for the local re-embedding of research. All of these, of course, spatialize the research-policy nexus, just as we do; each, however, then asks us to privilege one space over the others.

#3 from 2015: Have the MDGs affected developing country policies and spending? Findings of a new 50-country study.

Duncan Green
One of our Top Ten blog posts by readership in 2015, originally published on August 20, 2015.

One of the many baffling aspects of the post-2015/Sustainable Development Goal process is how little research there has been on the impact of their predecessor, the Millennium Development Goals. That may sound odd, given how often we hear ‘the MDGs are on/off track’ on poverty, health, education, etc., but saying ‘the MDG for poverty reduction has been achieved five years ahead of schedule’ is not at all the same as saying ‘the MDGs caused that poverty reduction’ – a classic case of confusing correlation with causation.

So I gave heartfelt thanks when Columbia University’s Elham Seyedsayamdost got in touch after a previous whinge on this topic, and sent me her draft paper for UNDP which, as far as I know, is the first systematic attempt to look at the impact of the MDGs on national government policy. Here’s the abstract, with my commentary in brackets/italics. The full paper is here: MDG Assessment_ES, and Elham would welcome any feedback (es548[at]columbia[dot]edu):

"This study reviews post‐2005 national development strategies of fifty countries from diverse income groups, geographical locations, human development tiers, and ODA (official aid) levels to assess the extent to which national plans have tailored the Millennium Development Goals to their local contexts. Reviewing PRSPs and non‐PRSP national strategies, it presents a mixed picture." [so it’s about plans and policies, rather than what actually happened in terms of implementation, but it’s still way ahead of anything else I’ve seen]

"The encouraging finding is that thirty-two of the development plans under review have either adopted the MDGs as planning and monitoring tools or “localized” them in a meaningful way, using diverse adaptation strategies from changing the target date to setting additional goals, targets and indicators, all the way to integrating MDGs into subnational planning." [OK, so the MDGs have been reflected in national planning documents. That’s a start.]

Beyond the quest for "policy implications": Alternative options for applied development researchers

Humanity Journal

This post, written by Michael Woolcock, is a contribution to an online symposium on the changing nature of knowledge production in fragile states. Be sure to read other entries by Deval Desai and Rebecca Tapscott, and by Lisa Denney and Pilar Domingo.

My nomination for development’s ‘Most Insightful, Least Cited’ paper is Ariel Heryanto’s “The development of ‘development.’”[1] Originally written in Indonesian in the mid-1980s, Heryanto’s gem has been cited a mere 79 times (according to Google Scholar), even in its carefully translated English incarnation. For me, this paper is so wonderful because it makes, in clear and clever ways, two key points that bear endless repetition, especially to today’s junior scholars. The first point is that inference from evidence is never self-evident: significance must always be interpreted through theory. Consider the seemingly obvious fact that the sun rises in the east every morning, he writes. What could be more universally and unambiguously true? The problem, of course, is that the sun does not rise in the east; instead, despite every piece of sensory evidence to the contrary, the earth rotates counterclockwise on its axis and revolves around a stationary sun, making it appear as if the sun rises in the east. But we only know this – or, more accurately, claim to know this – because today we happen to have a theory, itself based on more complex forms of observation and theory, that helps us interpret the prevailing evidence, reconcile it with evidence from analyses of other cosmic phenomena, and thus draw broadly coherent conclusions and inferences.

Heryanto’s second key point is that we are all captives of language, of the limits of any given tongue to convey the subtleties of complex issues. From this premise he proceeds to unpack the clumsy, alluring yet powerful word that in English we call ‘development’, noting that in Indonesian there are at least two very different interpretations of its meaning, and with this, two very different words – perkembangan and pembangunan – connoting two very different teleologies and policy agendas: the former a natural, ‘organic’ process akin to flowers blooming (“software”); the latter to an overt, intentional and ‘constructed’ political project of nation building (“hardware”). When translated into English, however, both perkembangan and pembangunan are typically rendered simply as “development,” thereby collapsing into a singular popular conception what in Indonesian discourse is a distinctly pluralist one. In the opening week of my class at the Kennedy School, which typically has 50 students who between them speak around 30 languages, we begin with a lively discussion of what “development” means in Arabic, Hindi, French, Turkish, Spanish, Swahili, Swedish… It turns out to mean all sorts of things.[2]

I open this way because I think the next article we need in this “genre” – though hopefully one that quickly transcends it because it is both highly insightful and highly cited! – is something akin to what Desai and Tapscott have begun with their ‘Tomayto Tomahto’ paper. In short, echoing Heryanto, we need more development research on development research. Such scholarship, however, would go beyond providing a mere chronology of changing professional styles, methodological emphases and funding characteristics (scale, sources, time horizons, expectations) to explanations of how and why such changes have occurred. Such explanations would be grounded in analyses of the shifting historical experiences and geo-political imperatives different generations of researchers have sought to accommodate, the particular ideas these experiences and imperatives rendered normative, and the concomitant gains and losses these changes have entailed for those finding themselves managing the “trade-offs” (such as they are) between scholarly independence and public utility.

Turning the gaze on ourselves: Acknowledging the political economy of development research

Humanity Journal

This post by Lisa Denney and Pilar Domingo is a contribution to an online symposium from Humanity Journal on the changing nature of knowledge production in fragile states. Be sure to read other entries, beginning with Deval Desai and Rebecca Tapscott's piece.

While researchers (ourselves included) now consistently underline the importance of understanding the political economy of developing countries and donors that support them in order to achieve better aid outcomes, the research industry remains largely ambivalent about questions of our own political economy. Desai and Tapscott’s paper is therefore a refreshing attempt to start unpacking this and the ways in which ‘evidence’ is produced within the development industry.

Here, we offer reflections on three stages of this process: building evidence, translating evidence and dislodging evidence. But a word of caution is also merited upfront. The fact that we are talking about “evidence,” rather than research, is itself telling and underscores a shift in the development industry over the last ten years. Speaking about “evidence” rather than about “research” suggests something much more concrete and indisputable. Evidence is taken as proof. But surely research is also debate. While there are, of course, things for which largely indisputable evidence can be found (the effects of vaccines on disease, for instance), the use of this terminology, particularly in the social sciences where little is concrete or universal, suggests that final answers are discoverable. It can, thus, be used to close down debate as much as to encourage it. Research, on the other hand, recognizes that most findings are contributions to knowledge that helpfully move us towards deeper understanding and greater awareness but do not claim to be the final word on a given topic.

Tomayto tomahto: The research supply chain and the ethics of knowledge production

Humanity Journal

This post is the first in a symposium from Humanity Journal on the changing nature of knowledge production in fragile states. It was written by Deval Desai, a Research Associate at ODI, and Rebecca Tapscott, a PhD Candidate at the Fletcher School at Tufts University.

Aid in the 21st century is increasingly evidence-driven. Between 2000 and 2006, the World Bank spent a total of $630 million on research. By 2011, the World Bank was spending $606 million per year, or about a quarter of its country budgets. In September of this year, by signing up to the Sustainable Development Goals, the global community enshrined a commitment to “increase significantly” the availability of high-quality data over the next 15 years, to facilitate qualitative as well as quantitative understandings of growth and progress.

As the international community seeks to tackle the “hard problems” of development—fragility, conflict, endemic poverty—qualitative research is ever-more important. These problems are not amenable to best-practice solutions but must be tackled through deep contextual understanding of their drivers. Or so the policy story goes.[1] As a result, conducting qualitative research today is different from the days when Geertz set out for Bali. Gone are the intrepid individuals setting off to explore and explain an untouched environment, unaware of the demands of policymakers.[2]

We argue that while practice has changed, the ideology of qualitative research has not. Qualitative research is generally understood as the individual exercise of research methods to produce knowledge about the world, knowledge that can then be taken up by governance actors of all stripes. By contrast, we believe that today we must understand research as a systemic intervention, within the broader context of globalization and international development. Therefore, we should start with the political economy of contemporary research—an iterative, professionalized and increasingly saturated practice—to rethink the political and ethical implications of the research that we do.

As a first step to this end, we contrast two stylized frameworks for understanding qualitative research in fragile contexts: the “fragility research” framework, which we argue dominates the current debate, and the “research supply chain” framework, which we offer as a new framework and a provocation to discussion. We discuss each in turn, first considering how fragility research frames knowledge production in fragile or conflict-affected states, identifying some assumptions on which the fragility research framework rests, and critiquing some of its key conclusions. We then discuss the research supply chain as an alternative framework to explore the relationship between knowledge generation and policy. Finally, we raise some questions based on the new framework’s implications.

‘Orderly traffic’ as a governance measure: a suggestion

Suvojit Chattopadhyay

Measuring good governance can be tricky, but ‘orderly traffic’ could serve as a useful indicator: it offers insights well beyond its narrow definition.

As hard as it is to define ‘governance’, we have plenty of indicators to measure its quality: the quality of key public services, the extent of corruption, the ease of doing business, and so on. One challenge with these indicators is the distance between processes and outcomes, in particular the assumptions involved in translating certain processes into tangible outcomes. It follows that by mixing up indicators for processes and outcomes, we risk, well, measuring what doesn’t matter and not measuring what does.

So as the title of this post suggests, could ‘orderly traffic’ be a good measure?

A familiar context: I live in Nairobi (and prior to that, in Delhi) and spend considerable time waiting in traffic. What often makes traffic a problem is a complete lack of coordination amongst motorists on the road. However, I don’t think the onus of coordination at an intersection should rest on motorists – there are traffic lights or traffic police whose job it is to enforce discipline and ensure orderliness on the road. In many cities, though, this is plain theory. In reality, traffic lights may not exist or may be broken; the traffic police may be absent, or simply incompetent. Motorists joust with each other every day and often end up creating gridlocks that hold everyone up. Please note that I am not talking about slow traffic caused purely by long stops at intersections waiting for the lights to change. I am specifically concerned with the ‘orderliness’ of the flow. People waste time, fuel and a lot of their good humour (unless you are in a zen state) when they are stuck in these gridlocks. It is usually more than evident to everyone whose fault it is and what the solution should be – and that usually only serves to raise tempers on the road. On days when the traffic flows smoothly, everyone seems happier. Zipping home after work is often the high point of the day.

Are We Measuring the Right Things? The Latest Multidimensional Poverty Index is Launched Today – What do You Think?

Duncan Green

I’m definitely not a stats geek, but every now and then, I get caught up in some of the nerdy excitement generated by measuring the state of the world. Take today’s launch (in London, but webstreamed) of a new ‘Global Multidimensional Poverty Index 2014’ for example – it’s fascinating.

This is the fourth MPI (the first came out in 2010), and it is again produced by the Oxford Poverty and Human Development Initiative (OPHI), led by Sabina Alkire, a definite uber-geek on all things poverty-related. The MPI brings together 10 indicators, with equal weighting for education, health and living standards (see table). If you are deprived in a third or more of the weighted indicators, you are counted as poor.
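In other words, the MPI works like a weighted checklist. Below is a minimal sketch of how such an Alkire-Foster-style score could be computed; the indicator names and the nested equal-weighting scheme are assumptions based on OPHI’s published methodology rather than anything shown in this post, so treat it as an illustration, not OPHI’s own code.

```python
from typing import Dict, List

# Illustrative MPI-style dimensions and indicators (names are assumptions drawn
# from OPHI's published methodology, not from this post). Each of the three
# dimensions carries a third of the total weight, shared equally among its indicators.
DIMENSIONS: Dict[str, List[str]] = {
    "health": ["nutrition", "child_mortality"],
    "education": ["years_of_schooling", "school_attendance"],
    "living_standards": ["cooking_fuel", "sanitation", "drinking_water",
                         "electricity", "flooring", "assets"],
}

POVERTY_CUTOFF = 1 / 3  # deprived in a third or more of the weighted indicators


def deprivation_score(deprived: Dict[str, bool]) -> float:
    """Weighted share of indicators in which a person is deprived."""
    score = 0.0
    for indicators in DIMENSIONS.values():
        weight = (1 / 3) / len(indicators)  # split the dimension's third equally
        score += sum(weight for name in indicators if deprived.get(name, False))
    return score


def is_mpi_poor(deprived: Dict[str, bool]) -> bool:
    """A person counts as multidimensionally poor if their score meets the cutoff."""
    return deprivation_score(deprived) >= POVERTY_CUTOFF


# Example: deprived in both education indicators (2 x 1/6 = 1/3) is already enough.
person = {"years_of_schooling": True, "school_attendance": True}
print(round(deprivation_score(person), 3), is_mpi_poor(person))  # 0.333 True
```

Because each dimension carries a third of the weight, being deprived across the whole of one dimension, or across any mix of indicators whose weights add up to a third, tips a person over the line, which is what makes the measure multidimensional rather than a simple count of boxes ticked.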

Why Performance Measurement for Development Professionals is Critical: Learning from Miller’s Pyramid

Tanya Gupta

According to a training report, no less than $55.4 billion was spent on training in the US alone in 2013, including payroll and external products and services. The US and other countries spend a significant amount of money on employee development with the implicit assumption that training is correlated with improved on-the-job performance. However, what exactly should we measure to ensure that this money is well spent? What do we need to measure to determine that employees are performing as expected and thus benefiting from these training expenditures?

Two responses that we often get to this “what should be measured” question are “performance” and “competencies.” The Government Accountability Office (GAO) of the United States defines performance measurement as the “ongoing monitoring and reporting of program accomplishments, particularly progress toward pre-established goals.” Performance measures, therefore, help define what success at the workplace means (“accomplishments”) and attempt to quantify performance by tracking the achievement of goals. Competencies are generally viewed as “a cluster of related knowledge, skills, and attitudes” (Parry 1996), and are thought to be measurable, correlated with performance, and improvable through training. While closely connected, the two are not the same thing: competencies are acquired skills, while performance is the use of those competencies at work. Measurement of both is critical.
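To make the distinction concrete, here is a minimal sketch of how the two kinds of measure might be recorded separately; the field names, scales and class structure are assumptions for illustration, not anything defined by GAO or Parry.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Illustrative only: a competency is an assessed skill level, while a performance
# measure tracks progress toward a pre-established goal. Names and scales are assumed.

@dataclass
class Competency:
    name: str      # e.g. "data analysis"
    level: float   # assessed skill on an assumed 0-5 scale


@dataclass
class PerformanceGoal:
    name: str        # e.g. "complete four country evaluations"
    target: float    # pre-established goal
    achieved: float  # accomplishment to date

    @property
    def attainment(self) -> float:
        """Share of the pre-set goal achieved so far."""
        return self.achieved / self.target if self.target else 0.0


@dataclass
class StaffRecord:
    competencies: List[Competency] = field(default_factory=list)
    goals: List[PerformanceGoal] = field(default_factory=list)

    def competency_profile(self) -> Dict[str, float]:
        """What the person can do: acquired skills."""
        return {c.name: c.level for c in self.competencies}

    def performance_summary(self) -> Dict[str, float]:
        """What the person has done: progress against pre-set goals."""
        return {g.name: g.attainment for g in self.goals}
```

Keeping the two records separate is what lets an organisation ask whether a rise in competency scores after training actually shows up later in goal attainment.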

How should a Post-2015 Agreement Measure Poverty? Vote for Your Preferred Methodology

Duncan Green

The blog’s been insufficiently techie of late, so step forward ODI’s Emma Samman with a piece + poll on measurement. Maybe the start of a ‘Friday geek’ series?

Around one in five people today still cannot provide for their most basic needs, notwithstanding progress on Millennium Development Goal (MDG) 1 (to halve extreme poverty and hunger). The High-Level Panel report affirms that ‘eradicating extreme poverty from the face of the earth by 2030’ should be at the core of a post-2015 agreement: ‘This is something that leaders have promised time and again throughout history. Today it can actually be done.’ The World Bank has endorsed this viewpoint, as have David Cameron, Barack Obama and The Economist, alongside several NGOs.

But is the goal ambitious enough – in terms of whom it targets, and how? We’re exploring these issues as part of Development Progress, a four-year project that aims to explore what’s working in development and why. We asked several experts to propose how to measure poverty in a post-2015 agreement. Their contributions show some consensus, but also several areas of contention.
