This post, written by Michael Woolcock, is a contribution to an online symposium on the changing nature of knowledge production in fragile states. Be sure to read the other entries by Deval Desai and Rebecca Tapscott, and by Lisa Denney and Pilar Domingo.
My nomination for development’s ‘Most Insightful, Least Cited’ paper is Ariel Heryanto’s “The development of ‘development.'” Originally written in Indonesian in the mid-1980s, Heryanto’s gem has been cited a mere 79 times (according to Google Scholar), even in its carefully translated English incarnation. For me, this paper is so wonderful because it makes, in clear and clever ways, two key points that bear endless repetition, especially to today’s junior scholars. The first point is that inference from evidence is never self-evident: significance must always be interpreted through theory. Consider the seemingly obvious fact that the sun rises in the east every morning, he writes. What could be more universally and unambiguously true? The problem, of course, is that the sun does not rise in the east; instead, despite every piece of sensory evidence to the contrary, the earth rotates counterclockwise on its axis and revolves around a stationary sun, making it appear as if the sun rises in the east. But we only know this – or, more accurately, claim to know this – because today we happen to have a theory, itself based on more complex forms of observation and theory, that helps us interpret the prevailing evidence, to reconcile it with evidence from analyses of other cosmic phenomena, and thus draw broadly coherent conclusions and inferences.
Heryanto’s second key point is that we are all captives of language, of the limits of any given tongue to convey the subtleties of complex issues. From this premise he proceeds to unpack the clumsy, alluring yet powerful word that in English we call ‘development’, noting that in Indonesian there are at least two very different interpretations of its meaning, and with this, two very different words – perkembangan and pembangunan – connoting two very different teleologies and policy agendas: the former a natural, ‘organic’ process akin to flowers blooming (“software”); the latter to an overt, intentional and ‘constructed’ political project of nation building (“hardware”). When translated into English, however, both perkembangan and pembangunan are typically rendered simply as “development,” thereby collapsing into a singular popular conception what in Indonesian discourse is a distinctly pluralist one. In the opening week of my class at the Kennedy School, which typically has 50 students who between them speak around 30 languages, we begin with a lively discussion of what “development” means in Arabic, Hindi, French, Turkish, Spanish, Swahili, Swedish… It turns out to mean all sorts of things.
I open this way because I think the next article we need in this “genre” – though hopefully one that quickly transcends it because it is both highly insightful and highly cited! – is something akin to what Desai and Tapscott have begun with their ‘Tomayto Tomahto’ paper. In short, echoing Heryanto, we need more development research on development research. Such scholarship, however, would go beyond providing a mere chronology of changing professional styles, methodological emphases and funding characteristics (scale, sources, time horizons, expectations) to explanations of how and why such changes have occurred. Such explanations would be grounded in analyses of the shifting historical experiences and geo-political imperatives different generations of researchers have sought to accommodate, the particular ideas these experiences and imperatives rendered normative, and the concomitant gains and losses these changes have entailed for those finding themselves managing the “trade-offs” (such as they are) between scholarly independence and public utility.
If Heryanto is right, and I think he is, then a central task of this story is to explain, firstly, the ways in which itemized lists of “policy implications” came to trump social scientific ‘theory’ as the necessary counterpoint to, rationale for, and vindicator of, “evidence.” And secondly, it must explain the ways in which particular linguistic tropes were colonized and deployed in the service of promoting exceedingly narrow research programs, ones primarily serving the interests of elite researchers themselves (i.e., securing publications in prestigious journals) rather than those of real officials with real responsibilities for responding to real problems in real countries under real constraints. Juxtaposing “anecdotal” with “rigorous” evidence, for example, asserting that “rigor” is synonymous with “randomized controlled trials,” and that the task of research is to provide “rigorous evidence” of “what works” to “busy policymakers” so that they can be assured they are adopting international ‘best practices’ is almost entirely an achievement of advocacy rather than, ironically, evidence that this is actually how effective policymaking and development outcomes have been achieved.
So understood, I think Desai and Tapscott have helpfully initiated what should be a fruitful but long overdue conversation among those funding, overseeing, assessing, conducting, or training to conduct, development research. Their comparison of the “fragility research” (FR) framework with a “research supply chain” (RSC) framework, and their particular concerns with ‘saturated’ research sites, leads me to three initial reflections. First, I suspect there is a sense in which RSC is a direct product of the success of FR, if only by default. Having long endured subservient status in the development research pecking order, and having long made the case that the hegemonic dominance of exclusively quantitative paradigms and data was not just unnecessarily restricting policy options but actively complicit in providing misguided policy advice – as deftly conveyed in the blistering critique by anthropologist Mike McGovern of the work by influential economists on fragile states – the somewhat more serious attention now being accorded the importance of “understanding context,” for example, is surely something to be welcomed, even if such realizations have only come about because of the outright failure of orthodoxy in too many cases to “deliver results.”
But be careful what you wish for, I hear Desai and Tapscott implicitly saying. Once the policymaking apparatus actually begins to listen to anthropological pleas, and responds with correspondingly greater resources and opportunities to influence policy and practice, a new set of (potentially problematic) realities kicks in. Reasonable researchers can make different choices at this juncture, but the trade-offs are seemingly stark. One can maintain notional scholarly ‘independence’ but be given modest resources and access, and thus generate findings based on an evidence base of severely modest scale and scope that, in all likelihood, will have little influence on practical responses to the consequential issues one purports to care about (and which presumably justified, at least in part, the researcher’s choice of topic in the first place). Alternatively, one can accept considerably larger sums of funding and potential access, but the inherent price is almost always being required to generate “actionable” findings expressed in a language that maps onto prevailing policy instruments. The relentless Scottian imperative to “see like a state” – to render “thin simplifications” of complex realities – can leave qualitative researchers feeling like they have had to make too many compromises, that having their hard-won research findings reduced to a “box” or an epigraph (or to providing “color,” as a dismissive economist colleague once put it) is too high a price to pay. For anthropologists such as David Mosse, most serious social research of inherently complex processes will give rise to policy advice that is “unimplementable.”
Need it be so? If qualitative researchers, in “fragile” or any other setting, construe their task in this way I think they are doomed to disappointment. But perhaps there are other options. My second response to Desai and Tapscott is that there are more hopeful ways forward for intrepid qualitative researchers in fragile settings, no matter where (or even if) one becomes caught up in the “research supply chain.” If one maintains a commitment to praxis, but sees a focus on “policy reform” itself as the problem, then new vistas potentially open up. Inspired in part by qualitative researchers and social theorists themselves, some have now begun to appreciate that a key development challenge across the world, even or especially in fragile states and middle-income countries, is low capability for policy implementation. No matter the foundation or content of a given “policy” in such countries, the binding constraint on its realization is whether a designated organizational apparatus is in place that can actually deliver it – consistently, for all, at scale, legitimately, and at incrementally increasing levels of sophistication. Virtually every country has noble education policies, for example, but precious few developing countries can actually combine school buildings, trained teachers and colorful textbooks into minimally educated students. By extension, the abiding challenge is figuring out how to “build” such organizational capability in those contexts and/or in those sectors where it is lacking.
Orthodoxy’s response to this challenge has been at best modest, and at worst itself part of the problem. In search of alternatives, some have seen hopeful possibilities in more “experimental” approaches to reform, where the task of research is not to discern, replicate and import standardized ‘best practices’ but to embed researchers within projects and programs as they unfold, providing real-time feedback and guidance to frontline implementers. Within the World Bank, this approach has been championed explicitly in the Social Observatory in India (which tracks massive livelihoods projects in rural areas) and the Global Delivery Initiative (which generates detailed case studies of implementation dynamics to identify particular aspects of projects that are working or not during implementation). Implicitly, similar approaches have been adopted in successful civil service reform initiatives in countries ranging from Sierra Leone and Nigeria to Tajikistan and Indonesia. An embryonic global movement of development practitioners, who have committed themselves to “Doing Development Differently,” also embodies this approach, one in which research (both qualitative and quantitative) is central but not because it seeks to provide findings that amount merely to a laundry list of generic ‘policy implications’.
My third and final observation pertains to Desai and Tapscott’s concerns over “saturation” – that certain sites that are either easy to access or exhibit (at least at one time) some especially compelling development phenomena find themselves being visited on an overly frequent basis by a veritable parade of officials and researchers. In one sense, “saturated settings”, too, are a function of success: it’s important for senior staff to have some tangible sense of life “in the field”, and time is always of the essence, yet anyone even vaguely familiar with organizing or participating in such visits is acutely aware of how contrived and superficial they usually are. Being subjected to such visits on a regular basis is deeply unfortunate for those residing in these settings, but their plight is an empirical one and should be addressed as such: How many such settings are there in a given geo-political space, and has this increased over time? What is actually happening to these communities as a result of such frequent interactions? There is a literature here waiting to be created. While acknowledging the reality of saturation, however, I suspect that, in the grand scheme of things, it is a relatively minor concern. Any diligent and ethically minded field researcher – and her/his supervisors and funders – would be attuned to “saturation” concerns (real as they are), and the ratio of serious researchers to possible research sites is surely very small (with the exception of those studying very small populations): there are perhaps thousands of such researchers, but there remain billions of poor and marginalized peoples.
For now, I think the Fragility Research and Research Supply Chain perspectives are largely symbiotic. The first point of departure for those self-consciously seeking to reconcile them is being explicit with one’s funders, supervisors, subjects, and audiences, and ultimately with oneself, about the serious challenges (and, if necessary, trade-offs) involved in making methodologically sound and ethically informed decisions. The second, perhaps, is seeking to change the central tone and terms of debate in applied development research: seeking less to influence an abstraction called “policy” – which is (and should be) largely determined by domestic political processes – and more to help those charged with implementing it in those communities that surely need it most.
Photograph by International Rice Research Institute (IRRI) via Flickr, some rights reserved
 My thanks to Anna Winoto for drawing Heryanto’s essay to my attention after the opening class in 2013, and for clarifying the different linguistic roots and connotations of perkembangan and pembangunan.
 See Mike McGovern, “Popular development economics: An anthropologist among the Mandarins,” Perspectives on Politics 9, no. 2 (2011): 345-55.
 See James Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (New Haven: Yale University Press, 1998).
 David Mosse, “Is good policy unimplementable? Reflections on the ethnography of aid policy and practice,” Development and Change 35, no. 4 (2004): 639-671.
 Lant Pritchett, The Rebirth of Education: Schooling Ain’t Learning (Washington, DC: Center for Global Development, 2013).
 Lant Pritchett, Michael Woolcock and Matt Andrews, “Looking like a state: Techniques of persistent failure in state capability for implementation,” Journal of Development Studies 49, no. 1 (2013): 1-18.
 A discussion of this literature is provided in Deval Desai and Michael Woolcock, “Experimental justice reform: Lessons from the World Bank and beyond,” Annual Review of Law and Social Science 11 (2015): 155-74.