
Measurement

Skills and agricultural productivity

By Markus Goldstein
Do skills matter for agricultural productivity? Rachid Laajaj and Karen Macours have a fascinating new paper out which looks at this question. The paper is fundamentally about how to measure skills better, and they put a serious amount of work into that. But for those of you dying to know the answer – skills do matter, with cognitive, noncognitive, and technical skills explaining about 12.1 to 16.6 percent of the variation in yields. Before we delve into that…
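
For readers curious about how a "share of variation in yields explained" figure is typically computed, here is a minimal sketch, not the authors' code: the variable names and simulated data below are purely illustrative, and the approach shown is just the incremental R-squared from adding skill indices to a yield regression.

```python
# Hypothetical sketch: incremental R^2 from adding skill measures to a yield regression.
# Not the authors' code; column names and data are made up for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "cognitive": rng.normal(size=n),
    "noncognitive": rng.normal(size=n),
    "technical": rng.normal(size=n),
    "plot_size": rng.normal(size=n),
})
df["log_yield"] = (0.3 * df["cognitive"] + 0.2 * df["noncognitive"]
                   + 0.25 * df["technical"] + 0.5 * df["plot_size"]
                   + rng.normal(scale=1.0, size=n))

# Baseline model: controls only.
base = sm.OLS(df["log_yield"], sm.add_constant(df[["plot_size"]])).fit()

# Augmented model: controls plus the three skill indices.
full = sm.OLS(df["log_yield"],
              sm.add_constant(df[["plot_size", "cognitive",
                                  "noncognitive", "technical"]])).fit()

# Share of yield variation attributable to the skill measures.
print(f"Incremental R^2 from skills: {full.rsquared - base.rsquared:.3f}")
```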

Tony Atkinson (1944 – 2017) and the measurement of global poverty

By Francisco Ferreira

Sir Anthony Atkinson, who was Centennial Professor at the London School of Economics and Fellow of Nuffield College at Oxford, passed away on New Year’s Day, at the age of 72. Tony was a highly distinguished economist: He was a Fellow of the British Academy and a past president of the Econometric Society, the European Economic Association, the International Economic Association and the Royal Economic Society.  He was also an exceedingly decent, kind and generous man.

Although his contributions to economics are wide-ranging, his main field was Public Economics. He was an editor of the Journal of Public Economics for 25 years, and his textbook “Lectures on Public Economics”, co-authored with Joe Stiglitz in 1980, remains a key reference for graduate students to this day. Within the broad field of public economics, Tony published path-breaking work on the measurement, causes and consequences of poverty and inequality – from his early work on Lorenz dominance in 1970, all the way to his more recent joint work with Piketty, Saez and others on the study of top incomes. Over his 50-year academic career, he taught, supervised and examined a large number of PhD students, some of whom came to work at the World Bank at some point in their careers.

Measuring inequality isn’t easy or straightforward - Here’s why

By Christoph Lakner

This is the third of three blog posts on recent trends in national inequality.

In earlier blog posts on recent trends in inequality, we referred to measurement issues that make this exercise challenging. In this blog post we discuss two such issues: the underlying welfare measure (income or consumption) used to quantify the extent of inequality within a country, and the fact that estimates of inequality based on household survey data are likely to underreport the incomes of the richest households. There are a number of other measurement challenges, such as those related to survey comparability, which are discussed in Poverty and Shared Prosperity 2016; for a focus on Africa, see also Poverty in a Rising Africa, published earlier in 2016.
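
To illustrate the second issue, here is a minimal sketch, using simulated data rather than the report's figures, of how underreporting by the richest households pulls down a measured Gini coefficient:

```python
# Illustrative sketch: how underreporting of top incomes biases the Gini downward.
# Simulated data only; not taken from Poverty and Shared Prosperity 2016.
import numpy as np

def gini(x):
    """Gini coefficient via the standard sorted-data formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # G = (2 * sum(i * x_i)) / (n * sum(x)) - (n + 1) / n, with i = 1..n
    return (2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum())) - (n + 1) / n

rng = np.random.default_rng(1)
income = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # "true" incomes

# Suppose the top 1% of households report only half of their income to the survey.
reported = income.copy()
top = reported >= np.quantile(reported, 0.99)
reported[top] *= 0.5

print(f"Gini, true incomes:     {gini(income):.3f}")
print(f"Gini, reported incomes: {gini(reported):.3f}")  # lower: inequality is understated
```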

Biting back at malaria: On treatment guidelines and measurement of health service quality

By Arndt Reichert

Growing up in a tropical country, one of us (Alfredo) was acutely aware of mosquito-borne diseases such as dengue and malaria. Vector-control strategies have long been promoted, and still are, by government- and school-led campaigns to limit the spread of these diseases. Against that backdrop, it is somewhat alarming that diseases spread by mosquitoes remain an enormous challenge facing large parts of the developing and even the developed world, particularly sub-Saharan Africa. It is perhaps less surprising that our shared interest in the health sector has resulted in a joint paper on assessing the overall quality of the health care system via compliance with established treatment guidelines.

Towards a survey methodology methodology: Guest post by Andrew Dillon

When I was a graduate student setting off on my first data collection project, my advisors pointed me to the 'Blue Books' for advice on how to make survey design choices. The Glewwe and Grosh volumes are still an incredibly useful resource on multi-topic household survey design. Since the publication of these volumes, the rise of panel data collection, increasingly in the form of randomized control trials, has prompted a discussion about…

Issues of data collection and measurement

By Berk Ozler
About five years ago, soon after we started this blog, I wrote a blog post titled "Economists have experiments figured out. What's next? (Hint: It's Measurement)". Soon after the post, folks from IPA emailed me saying we should experiment with some important measurement issues, making use of IPA's network of studies around the world.

What’s New in Measuring Subjective Expectations?

By David McKenzie

Last week I attended a workshop on Subjective Expectations at the New York Fed. There were 24 new papers on using subjective probabilities and subjective expectations in both developed and developing country settings. I thought I'd summarize some of the things I learned, or that seemed most interesting to me and potentially to our readers:

Subjective Expectations don’t provide a substitute for impact evaluation
I presented a new paper based on the large business plan competition in Nigeria for which I conducted an impact evaluation. Three years after applying for the program, I elicited expectations from the treatment group (competition winners) of what their businesses would be like had they not won, and from the control group of what their businesses would have been like had they won. The key question of interest is whether these individuals can form accurate counterfactuals. If they could, this would give us a way to measure the impacts of programs without control groups (just ask the treated for counterfactuals), and to derive individual-level treatment effects. Unfortunately, the results show that neither the treatment nor the control group can form accurate counterfactuals. Both overestimate how important the program was for their businesses: the treatment group thinks it would be doing worse, had it lost, than the control group actually is doing, while the control group thinks it would be doing much better, had it won, than the treatment group actually is doing. In a dynamic environment, where businesses are changing rapidly, it doesn't seem that subjective expectations can offer a substitute for impact evaluation counterfactuals.
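
To make the comparison concrete, here is a minimal sketch, with made-up data and hypothetical magnitudes rather than the paper's estimates, of how elicited counterfactuals might be checked against the experimental benchmark:

```python
# Illustrative sketch of the counterfactual-expectations check described above.
# Data, variable names, and magnitudes are hypothetical, not the paper's results.
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Actual outcomes (e.g., log profits) three years after the competition.
treat_actual = rng.normal(loc=2.0, scale=0.5, size=n)    # winners
control_actual = rng.normal(loc=1.6, scale=0.5, size=n)  # non-winners

# Elicited counterfactuals: winners asked "had you lost", non-winners "had you won".
# In this simulation both groups overstate the program's importance.
treat_cf_if_lost = rng.normal(loc=1.2, scale=0.5, size=n)   # below the control mean
control_cf_if_won = rng.normal(loc=2.4, scale=0.5, size=n)  # above the treatment mean

# Experimental benchmark: difference in group means.
experimental_ate = treat_actual.mean() - control_actual.mean()

# "Self-reported" treatment effects built from elicited counterfactuals.
treat_self_reported_te = treat_actual.mean() - treat_cf_if_lost.mean()
control_self_reported_te = control_cf_if_won.mean() - control_actual.mean()

print(f"Experimental ATE:                   {experimental_ate:.2f}")
print(f"Treatment group's self-reported TE: {treat_self_reported_te:.2f}")
print(f"Control group's self-reported TE:   {control_self_reported_te:.2f}")
```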

From method to market: Some thoughts on the responses to "Tomayto tomahto"

By Humanity Journal

In this final post, Deval Desai and Rebecca Tapscott respond to comments by Lisa Denney and Pilar Domingo, Michael Woolcock, Morten Jerven, Alex de Waal, and Holly Porter.

Photo: Paktika Youth Shura

Our paper, Tomayto Tomahto, is in essence an exhortation and an ethical question. The exhortation: treat and unpack fragility research (for we limit our observations to research conducted for policy-making about fragile and conflict-affected places) as an institution of global governance, a set of complex social processes and knowledge practices that produce evidence as part of policy-making. The ethical question: all institutions contain struggles over the language and rules by which they allocate responsibility between individual actors (ethics) and structural factors (politics) for their effects; these might be law, democratic process, or religious dictate. In light of the trends of saturation and professionalization that we identify (and, as Jerven astutely points out in his response, a profound intensification of research), is it still sufficient to allocate responsibility for the effects of fragility research using the language and rules of method?

The five responses to our piece enthusiastically take up the exhortation. A series of positions are represented: the anthropologist (Porter), the applied development researchers (Denney and Domingo), the anthropologist/practitioner (de Waal), the practitioner/sociologist (Woolcock), and the economist (Jerven). They unpack the profoundly socio-political nature of the relationship between research and policy from a number of different perspectives: Porter's intimate view from the field, Jerven's sympathetic ear in the statistics office, Woolcock's and Denney and Domingo's feel for the alchemic moments when research turns into policy at the global level, and de Waal's distaste for the global laboratories in which those moments occur, preferring instead the local re-embedding of research. All of these, of course, spatialize the research-policy nexus, just as we do; however, each then asks us to privilege one space over the others.

