This post is co-authored with Thomas Pave Sohnesen
Since 2011, we have struggled to reconcile the poverty trends from two complementary poverty monitoring sources in Malawi. From 2005 to 2009, the Welfare Monitoring Survey (WMS) was used to predict consumption and showed a solid decline in poverty. In contrast, the 2004/05 and 2010/11 rounds of the Integrated Household Survey (IHS) that measured consumption through recall-based modules showed no decline.
Today’s blog post is about a household survey experiment and our working paper, which can at least partially explain why complementary monitoring tools can produce different results. The findings are also relevant for other settings in which vastly different instruments are used to measure the same outcomes.
This post is co-authored with Marshall Burke.
One morning last August, a number of economists, engineers, Silicon Valley players, donors, and policymakers met on the UC-Berkeley campus to discuss frontier topics in measuring development outcomes. The idea behind the event was not that economists could ask experts to create the measurement tools they need, but that measurement scientists could tell economists what was happening at the frontier of measuring development-related outcomes. Instead of waiting for pilot results, we decided to blog about some of these ideas and get inputs from Development Impact readers. In this series, we start with recent progress on measuring (“remote-sensing”) agricultural crop yields from space.
Yesterday the World Bank released its first report on the socioeconomic impacts of Ebola based on household data. The report provides a number of new insights into the crisis in Liberia, showing, for example, an unexpected resilience in agriculture, and broader economic impacts than previously believed in areas outside the main zones of infection. As widely reported, prices for staple crops (such as rice) have jumped well above seasonal increases, but additionally we find an important income effect. The highest prices appear in the remote southeast of the country, an area that has been relatively unaffected by the disease. The link to the full report can be found here.
I’m definitely not a stats geek, but every now and then, I get caught up in some of the nerdy excitement generated by measuring the state of the world. Take today’s launch (in London, but webstreamed) of a new ‘Global Multidimensional Poverty Index 2014’ for example – it’s fascinating.
This is the fourth MPI (the first came out in 2010), and is again produced by the Oxford Poverty and Human Development Initiative (OPHI), led by Sabina Alkire, a definite uber-geek on all things poverty-related. The MPI brings together 10 indicators, with equal weighting across education, health and living standards (see table). If you tick a third or more of the boxes, you are counted as poor.
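For the geeks, the counting rule above can be sketched in a few lines of code. This is a minimal illustration of the weighted-deprivation logic, not OPHI's actual implementation; the indicator names and the example household below are made up, and the weights simply split each dimension's one-third share evenly across its indicators.

```python
# Illustrative sketch of the MPI counting rule (hypothetical indicator names).
# Three equally weighted dimensions; each dimension's 1/3 weight is split
# evenly across its indicators.
DIMENSIONS = {
    "education": ["years_of_schooling", "school_attendance"],
    "health": ["nutrition", "child_mortality"],
    "living_standards": ["electricity", "sanitation", "water",
                         "flooring", "cooking_fuel", "assets"],
}

WEIGHTS = {
    ind: (1 / 3) / len(inds)
    for inds in DIMENSIONS.values()
    for ind in inds
}

POVERTY_CUTOFF = 1 / 3  # "tick a third or more of the boxes"

def deprivation_score(deprivations):
    """Sum the weights of the indicators a household is deprived in."""
    return sum(WEIGHTS[ind] for ind, deprived in deprivations.items() if deprived)

def is_mpi_poor(deprivations):
    """A household is MPI-poor if its weighted score meets the cutoff."""
    return deprivation_score(deprivations) >= POVERTY_CUTOFF

# Example: deprived in both education indicators -> 1/6 + 1/6 = 1/3 -> poor.
household = {ind: False for ind in WEIGHTS}
household["years_of_schooling"] = True
household["school_attendance"] = True
print(is_mpi_poor(household))  # True
```

Note how the equal dimension weights mean that being fully deprived in any one dimension is enough to be counted as poor, while a single living-standards deprivation (weight 1/18) is not.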
According to a training industry report, no less than $55.4 billion was spent on training in the US alone in 2013, including payroll and external products and services. The US and other countries spend a significant amount of money on employee development with the implicit assumption that training is correlated with improved on-the-job performance. However, what exactly should we measure to ensure that this money is well spent? What is it that we need to measure to determine that employees are performing as expected and thus benefiting from these training expenditures?
Two responses that we often get to this “what should be measured” question are “performance” and “competencies”. The Government Accountability Office (GAO) of the United States defines performance measurement as the “ongoing monitoring and reporting of program accomplishments, particularly progress toward pre-established goals.” Performance measures, therefore, help define what success at the workplace means (“accomplishments”), and attempt to quantify performance by tracking the achievement of goals. Competencies are generally viewed as “a cluster of related knowledge, skills, and attitudes” (Parry 1996), and are thought to be measurable, correlated with performance, and improvable through training. While closely connected, they are not the same thing. Competencies are acquired skills, while performance is the use of those competencies at work. Measurement of both is critical.
Paper 1: List randomization for measuring illegal migration
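For readers unfamiliar with the technique named in the title: list randomization (the item count technique) asks a control group how many of K innocuous statements apply to them, while a treatment group gets the same list plus the sensitive item; since respondents report only a count, no one reveals the sensitive item directly, and the difference in mean counts estimates its prevalence. A minimal simulated sketch of that estimator, with made-up parameters and not drawn from the paper itself:

```python
# Hedged sketch of the list-randomization (item count) estimator on simulated data.
import random

def simulate_response(has_sensitive_trait, treatment,
                      n_innocuous=4, p_innocuous=0.5):
    """Respondent reports only the COUNT of list items that apply to them."""
    count = sum(random.random() < p_innocuous for _ in range(n_innocuous))
    if treatment and has_sensitive_trait:
        count += 1  # the sensitive item appears only on the treatment list
    return count

random.seed(0)
true_prevalence = 0.30  # assumed share holding the sensitive trait
treat, control = [], []
for _ in range(20000):
    trait = random.random() < true_prevalence
    if random.random() < 0.5:  # random assignment to list version
        treat.append(simulate_response(trait, treatment=True))
    else:
        control.append(simulate_response(trait, treatment=False))

# Difference in mean counts estimates the sensitive trait's prevalence.
estimate = sum(treat) / len(treat) - sum(control) / len(control)
print(round(estimate, 2))  # should land near the true prevalence of 0.30
```

The price of this privacy protection is statistical: the innocuous items add noise, so list experiments need larger samples than direct questions for the same precision.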
The blog’s been insufficiently techie of late, so step forward ODI’s Emma Samman with a piece + poll on measurement. Maybe the start of a ‘Friday geek’ series?
Some one in five people today still cannot provide for their most basic needs, progress on Millennium Development Goal (MDG) 1 (to halve extreme poverty and hunger) notwithstanding. The High-Level Panel report affirms that ‘eradicating extreme poverty from the face of the earth by 2030’ should be at the core of a post-2015 agreement: ‘This is something that leaders have promised time and again throughout history. Today it can actually be done.’ The World Bank has endorsed this viewpoint, as have David Cameron, Barack Obama and The Economist, alongside several NGOs.
But is the goal ambitious enough – in terms of who it targets, and how? We’re exploring these issues as part of Development Progress, a four-year project that aims to explore what’s working in development and why. We asked several experts to propose how poverty should be measured in a post-2015 agreement. Their contributions show some consensus, but also several areas of contention.