

Four examples of cutting-edge research on labor topics

By Esteve Sala
One of the topics of the 30 research papers is female labor in Latin American countries. (Photo: Charlotte Kesl / World Bank)

Economic research is essential for designing and implementing evidence-based solutions to improve job opportunities. In a recent conference organized by the World Bank and IZA, researchers from around the world presented over 30 research papers on important labor topics such as migration, gender, youth employment, and labor policies in low-income countries. Here is an illustrative sample of four innovative works presented during the conference.

Open access resources

By Elisa Liberatori Prati


This blog post is a part of the International Open Access Week blog series

Thanks to Open Access (OA), scientists, health care professionals, libraries, and institutions facing budget limitations can access scholarly publications at little or no cost. Claire Guimbert, Research Librarian in ITS Knowledge and Information, has gathered just a few of the many resources from outside the World Bank that our library staff has found helpful:

Academic libraries and open access resources in Latin America

By Elisa Liberatori Prati

This blog post is a part of the International Open Access Week blog series

In our continuing blog series leading up to International Open Access Week (October 23-27), Eduardo E. Quintero Orta, Research Librarian in ITS Knowledge and Information, discusses the importance and prevalence of Open Access to research in Latin America:

“Education is a powerful driver of development and one of the strongest instruments for reducing poverty and improving health, gender equality, peace, and stability”

Can access to the World Bank archives improve health outcomes?

By Elisa Liberatori Prati
© Dominic Chavez/World Bank


This blog post is a part of the International Open Access Week blog series

The World Bank is committed to transparency and accountability and welcomes opportunities to explain its work to the widest audience possible. Openness promotes engagement with stakeholders, which in turn, improves the design and implementation of projects and policies, and strengthens development outcomes.

Knowing what we don’t know (on the web)

By Tanya Gupta
Welcome to the third blog in the technology-aided gut (TAG) checks series. In this year-long skills-transfer blog series, we use an interactive, just-in-time learning strategy to help you learn to do TAG checks on your data.
 
In our last post we discussed six techniques for making our questions more precise so as to get the best answers from the web. In this blog, we look at the other side of the equation: how can we be reasonably confident that the answers we get from an online resource are correct? How can we know that the web has given us the right answer when we do not have the subject matter expertise ourselves?


Path to “Confucian” wisdom

How to know what you don’t know

The adage “True wisdom is knowing what you don't know” has been attributed to Confucius. While addressing this philosophical statement is beyond the scope of this blog, it seems fitting for a pragmatic article to borrow from ancient wisdom. Knowing what you do not know is the essential problem of learning in the modern era. Legacy learning depends on teachers and textbooks that you can rely on to be correct. In contemporary learning, however, how can you tell the correct from the incorrect if you don’t have sufficient knowledge of a domain?
 
We describe a four-step process you can use to eliminate the really bad answers and get a decent idea of which ones are very good.

The process cannot guarantee that the answers we get are absolutely correct, but the level of accuracy it yields will be useful in most cases.

Found a positive impact, published in a peer-reviewed journal. What more do we need?

By Urmy Shukla

A family uses protective malaria bed nets in their home, Nigeria.

In this blog, we advocate for in-depth reporting on implementation processes, evaluation processes, and the relevant contextual details of interventions and their linked evaluations. This will facilitate research transparency, as well as assessments of both learning and the potential for generalizability beyond the original study setting: learning lessons from ‘there’ for ‘here,’ but not necessarily promoting the strict and exact duplication of a program from one setting to another, in line with an understanding of external validity that is appropriate for the social sciences in development.
 
We start with a hypothetical scenario of an intervention and associated evaluation, based on too-frequent experiences in the impact evaluation space. We hope that it doesn’t sound familiar to those of you who have been involved in evaluation or have tried to make sense of evaluation results, but suspect that it will.
 
A research team, connected to a larger research and evaluation organization, ran a study on an intervention. For reasons of statistical and political significance, they have deemed it sufficiently successful and worthy of scaling up, at least in a very specific new setting. 
 
The intervention sought to overcome the following problem, which has both supply-side and demand-side dimensions: people in malarious areas may procure a bednet (whether for free or at a positive price), but they do not always follow through with maintenance (re-treatment or replacement).
 
On the supply side, the private sector only sporadically offers re-treatment and replacement, and it is expensive, while the public sector does not always have supplies available. The intervention therefore concentrated provision of this service at a specific time and place through temporary service centers.
 
On the demand side, people with nets often don’t understand the need for re-treatment and, even if they do, continually put off doing so. The intervention therefore included a non-monetary incentive for which there is local demand (in this case, soap), to be picked up at the time of net re-treatment.

Knowledge production: An essential tool for public policy in Africa

By Françoise Rivière



Over the past five years, the Agence Française de Développement (AFD) and the World Bank Group have coproduced 20 volumes on various dimensions of development in Africa. The Africa Development Forum (ADF) book series has addressed subjects including the agricultural, demographic, climatic, and environmental challenges facing African countries, as well as the various methods of financing infrastructure, cities, and social safety nets. In-depth research brings to light specific and diverse situations encountered around the continent. Moving beyond the results of such endeavors, the question remains of how to conduct research that can make a pertinent and meaningful contribution to public policy. Two fundamental tools are required: robust, and oftentimes original, data and cutting-edge research. This research must not only be connected to international realities; it must be firmly anchored in African realities and geared toward public policy making.

From method to market: Some thoughts on the responses to "Tomayto tomahto"

By Humanity Journal

In this final post, Deval Desai and Rebecca Tapscott respond to comments by Lisa Denney and Pilar Domingo, Michael Woolcock, Morten Jerven, Alex de Waal, and Holly Porter.

Paktika Youth Shura

Our paper, Tomayto Tomahto, is in essence an exhortation and an ethical question. The exhortation: treat and unpack fragility research (for we limit our observations to research conducted for policy-making about fragile and conflict-affected places) as an institution of global governance, a set of complex social processes and knowledge practices that produce evidence as part of policy-making. The ethical question: all institutions contain struggles over the language and rules by which they allocate responsibility for their effects between individual actors (ethics) and structural factors (politics); these rules might come from law, democratic process, or religious dictate. In light of the trends of saturation and professionalization that we identify (and, as Jerven astutely points out in his response, a profound intensification of research), is it still sufficient to allocate responsibility for the effects of fragility research using the language and rules of method?

The five responses to our piece enthusiastically take up the exhortation. A series of positions are represented: the anthropologist (Porter), the applied development researchers (Denney and Domingo), the anthropologist/practitioner (de Waal), the practitioner/sociologist (Woolcock), and the economist (Jerven). They unpack the profoundly socio-political nature of the relationship between research and policy from a number of different perspectives: Porter’s intimate view from the field, Jerven’s sympathetic ear in the statistics office, Woolcock’s and Denney and Domingo’s feel for the alchemic moments when research turns into policy at the global level, and de Waal’s distaste for the global laboratories in which those moments occur, preferring the local re-embedding of research. All of these, of course, spatialize the research-policy nexus, just as we do; yet each asks us to privilege one space over the others.

Avoiding perversions of evidence-informed decision-making

By Suvojit Chattopadhyay

Emanuel Migo giving a presentation in Garantung village, Palangkaraya, Central Kalimantan, Indonesia.

How to avoid: “We saw the evidence and made a decision… and that decision was: since the evidence didn’t confirm our priors, to try to downplay the evidence.”

Before we dig into that statement (based-on-a-true-story-involving-people-like-us), we start with a simpler, obvious one: many people are involved in evaluations. We use the word ‘involved’ rather broadly. Our central focus for this post is people who may block the honest presentation of evaluation results.

In any given evaluation, several groups of organizations and people have a stake in the program or policy being evaluated. Most obviously, there are researchers and implementers. There are also participants. And, in much of the global development ecosystem, there are funders of the program, who may be separate from the funders of the evaluation. Both sets of funders may work through sub-contractors and consultants, bringing yet others on board.

Our contention is that not all of these actors are explicitly acknowledged in the current transparency movement in social science evaluation, with implications for the later acceptance and use of results. The focus is often on a contract between researchers and evidence consumers as a sign that, in Ben Olken’s terms, researchers are not nefarious and power-hungry, statistically speaking (2015). To achieve its objectives, the transparency movement requires more than committing to a core set of analyses ex ante (through pre-analysis or commitment to analysis plans) and study registration.

To make sure that research is conducted openly at all phases, transparency must include engaging all stakeholders — perhaps particularly those that can block the honest sharing of results. This is in line with, for example, EGAP’s third research principle on rights to review and publish results. We return to some ideas of how to encourage this at the end of the blog.

