
The work of measuring work

By Kathleen Beegle

Measurement is on my mind. Partly because of the passing of Alan Krueger (credited with having a major influence on the development of empirical research – notably his influential book Myth and Measurement). But also because a couple of weeks ago, I attended an all-day brainstorming meeting on “Methods and Measurement” hosted by the Global Poverty Research Lab at Northwestern University and IPA. The workshop covered a range of topics on gaps and innovations in research methods related to measurement, such as: integrating data sources and applying new methods (such as satellite data and machine learning combined with household surveys to get improved yield estimates), untangling complex socioeconomic data (such as mapping social networks), crafting measures of concepts where we lack consensus (e.g. financial health), and bringing new tech into our survey efforts (using smartphones, physical trackers, etc.).

Do conditional cash transfers empower women?

By Markus Goldstein
A couple of weeks ago, I blogged about a new approach to measuring within-household decision making. Continuing in that vein, I was recently reading a paper (ungated version here) by Almas, Armand, Attanasio, and Carneiro which offers a really n

Unpacking within household interactions: the roles people take

By Markus Goldstein
Some of us often try to understand how households may be functioning by using intrahousehold decision-making questions. For example, the multi-country Demographic and Health Surveys often ask who makes decisions on large household purchases: the male, the female, or the two together. The idea is that this kind of question helps us understand power dynamics, and there is a fair bit of correlational work suggesting this makes sense.

Sex, Lies, and Measurement: Do Indirect Response Survey Methods Work? (No…)

By Berk Ozler

Smart people, mainly with good reason, like to make statements like “Measure what is important, don’t make important what you can measure,” or “Measure what we treasure and not treasure what we measure.” It is rumored that even Einstein weighed in on this by saying: “Not everything that can be counted counts and not everything that counts can be counted.” A variant of this has also become a rallying cry among those who are “anti-randomista,” to agitate against focusing research only on questions that one can answer experimentally.

However, I am confident that all researchers can agree that few things are worse than the helpless feeling of not being able to vouch for the veracity of what you measured. We can deal with papers reporting null results, we can deal with messy or confusing stories, but what gives no satisfaction to anyone is to present some findings and then have to say: “This could all be wrong, because we’re not sure the respondents in our surveys are telling the truth.” This does not mean that research on sensitive topics does not get done, but as with the proverbial sausage, it is necessary to block out where the data came from and how it was made.

The shifting gravity of global poverty

By Daniel Mahler

Thirty years ago, 1 in 7 of the world’s extreme poor – those living on less than $1.90 a day – were in Sub-Saharan Africa. Over the years, as other regions successfully reduced their poverty levels, that share has risen, and by 2015, 4 in 7 of the global poor were living in Sub-Saharan Africa. The newly published Poverty and Shared Prosperity Report warns that as many as 9 in 10 of the world’s poor may live in this region by 2030 if current trends continue.

Increasing performance transparency! Generating citizen participation! Improving local government! It's SUPERMUN

By Marcus Holmlund

Running a local government is not sexy. It’s making sure that roads are maintained, there is water to drink, health clinics are stocked and staffed, and schools are equipped to teach. Often, it means doing these things with limited resources, infrastructure, and manpower. With few exceptions, there is little fanfare and glamour. It’s a bit like being a soccer referee: you’re doing a good job when no one notices you’re there.

The Economic Case for Early Learning

By Harry A. Patrinos

We are living in a learning crisis. According to the World Bank’s 2018 World Development Report, millions of students in developing countries are in schools that are failing to educate them to succeed in life. According to the UNESCO Institute for Statistics, there are 617 million children and youth of primary and secondary school age who are not learning the basics in reading, two-thirds of whom are attending school. The urgency to invest in learning is clear.

Measuring the tricky things

By Varun Gauri

Along with the Center for Experimental Social Science at Nuffield College, Oxford, eMBeD co-organized a conference called “Measuring the Tricky Things.” The lineup included Susan Fiske presenting a magisterial overview of her decades-long work on the stereotype content model, Armin Falk on his groundbreaking study of time, risk, and social preferences among 80,000 individuals in 65 countries, Karla Hoff on using lab-in-the-field experiments to identify the honor ethic among higher-caste villagers in North India, Ryan Enos on measuring racial attitudes, Rachel Glennerster on measuring women’s empowerment, Julian Jamison on how and why to use item count techniques to mitigate social desirability bias, Henry Travers on debiasing estimates of wildlife survival, Anandi Mani on assessing the effect of financial worry on cognitive performance with cell phones, and Sheheryar Banuri on using videos to probe the effect of pro-poor bonuses on doctors’ decisions about which patients to see. My eMBeD co-head Renos Vakis assessed the strengths and weaknesses of World Bank surveys on socio-emotional skills. I discussed the reliability and validity of measurements of social norms with respect to women’s labor force participation in Jordan.

Why the World Bank is adding new ways to measure poverty

By Maria Ana Lugo

The 2018 Poverty and Shared Prosperity Report shows how poverty is changing and introduces improved ways to monitor our progress toward ending it.

The landscape of extreme poverty is now split in two. While most of the world has seen extreme poverty fall below 3 percent of the population, in Sub-Saharan Africa extreme poverty still affects more than 40 percent of people. The lamentable distinction of being home to the most people living in extreme poverty has shifted, or will soon shift, from India to Nigeria, symbolizing the increased concentration of poverty in Africa.

How can machine learning and artificial intelligence be used in development interventions and impact evaluations?

By David McKenzie

Last Thursday I attended a conference on AI and Development organized by CEGA, DIME, and the World Bank’s Big Data groups (website, where they will also add video). This followed a World Bank policy research talk last week by Olivier Dupriez on “Machine Learning and the Future of Poverty Prediction” (video, slides). These events highlighted a lot of fast-emerging work, which I thought, given this blog’s focus, I would try to summarize through the lens of thinking about how it might help us in designing development interventions and impact evaluations.

A typical impact evaluation works with a sample S, gives units a treatment Treat, and is interested in estimating something like:

Y(i,t) = b(i,t)*Treat(i,t) + D'X(i,t)   for units i in the sample S

We can think of machine learning and artificial intelligence as possibly affecting every term in this expression:
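To fix ideas, the baseline version of this equation (before any machine learning enters) can be sketched in a few lines. This is a minimal illustration on simulated data, not code from the post: all the variable names, sample size, and parameter values below are invented for the example, and the treatment effect b is taken as constant rather than varying by unit and time.

```python
import numpy as np

# Simulate the evaluation equation Y = b*Treat + D'X + noise
# (illustrative values only; constant treatment effect b).
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))          # covariates X(i)
treat = rng.integers(0, 2, size=n)   # randomly assigned Treat(i)
b_true = 2.0                         # true treatment effect
d_true = np.array([0.5, -1.0, 0.25]) # true covariate coefficients D
y = b_true * treat + X @ d_true + rng.normal(size=n)

# Stack [Treat, X] and recover (b, D) by ordinary least squares.
Z = np.column_stack([treat, X])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
b_hat, d_hat = coef[0], coef[1:]
```

The ML extensions discussed at the conference relax pieces of this setup, for example letting b(i,t) vary with X via flexible learners, or constructing Y itself from satellite imagery.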

