
David McKenzie's blog

Development Impact turns 6: six questions for our sixth birthday

We are proud to have kept the blog going for another year, and are marking its 6th birthday. In lieu of presents, we’d love your thoughts on what you would like to see more or less of going forward. In particular, any comments or feedback on the following would be great:

Weekly links March 31: inequality across firms and within households, measuring farm labor, should we fund research by lottery, and more…

  • In the Harvard Business Review, Nick Bloom discusses how much of the rise in inequality is being driven by differences between firms: “companies are paying more to get more: boosting salaries to recruit top talent or to add workers with sought-after skills. The result is that highly skilled and well-educated workers flock to companies that can afford to offer generous salaries, benefits, and perks — and further fuel their companies’ momentum. Employees in less-successful companies continue to be poorly paid and their companies fall further behind.”
  • Vox EU piece by Brown, van de Walle and Ravallion summarizing their work in two recent papers on the difficulties in targeting the poor: “about three-quarters of underweight women and undernourished children are not found in the poorest 20% of households. This is consistent with evidence of considerable intra-household inequality.”

Weekly links March 24: why those of us in our 40s matter so much, an ALMP program that may be working, more CSAE round-ups, and more…


The Iron Law of ALMPs: Offer a Program to 100 People, maybe 2 get jobs


I have just finished writing up and expanding my recent policy talk on active labor market policies (ALMPs) into a research paper (ungated version) which provides a critical overview of impact evaluations in this area. While the talk focused more on summarizing my own work, for this review paper I looked much more closely at the growing number of randomized experiments evaluating these policies in developing countries. Much of this literature is very new: of the 24 RCTs whose results I summarize in several tables, 16 were published in 2015 or later, and only one before 2011.

I focus on three main types of ALMPs: vocational training programs, wage subsidies, and job search assistance services like screening and matching. I’ll summarize a few findings and implications for evaluations that might be of most interest to our blog readers – the paper, of course, provides a lot more detail and discusses further the implications for policy and for other types of ALMPs.

Weekly links March 17: Irish insights, non-working rural women, changes afoot in IRBs, and more…


Weekly links March 10: Ex post power calcs ok? Indian reforms, good and bad policies, and more…

  • Andrew Gelman argues that it can make sense to do design analysis/power calculations after the data have been collected – but he also makes clear how NOT to do this. For example, if a study with a small sample and noisy measurement finds a statistically significant increase of 40% in profits, don’t then check whether it had power to detect a 40% increase; instead, base the effect size you examine power for on external information, and look at the probability that the estimated treatment effect has the wrong sign or that its magnitude is overestimated. Gelman and Carlin provide an R function, retrodesign(), to do these calculations.
  • Annie Lowrey interviews Angus Deaton in the Atlantic, and discusses whether it is better to be poor in the Mississippi Delta or in Bangladesh, opioid addiction, and the class of President Obama.
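The design-analysis logic in the Gelman bullet above can be sketched in a few lines. This is a minimal, illustrative Python translation of the idea behind retrodesign() (the function name, signature, and defaults here are my own, not from the original R code): given a hypothesized true effect and the standard error of the estimate, it computes the power, the probability a significant estimate has the wrong sign (Type S error), and the expected exaggeration of significant estimates (Type M error).

```python
import numpy as np
from scipy import stats

def retrodesign(true_effect, se, alpha=0.05, n_sims=100_000, seed=0):
    """Post-hoc design analysis in the spirit of Gelman and Carlin.

    Returns (power, type_s, exaggeration):
      power        - probability of a statistically significant result,
      type_s       - probability a significant estimate has the wrong sign,
      exaggeration - expected |estimate| / |true effect| among significant
                     results (the Type M error).
    """
    z_crit = stats.norm.ppf(1 - alpha / 2)   # critical value, two-sided test
    lam = true_effect / se                   # true effect in SE units
    # Probability of significance in either direction
    power = stats.norm.cdf(lam - z_crit) + stats.norm.cdf(-lam - z_crit)
    # Share of significant results that land on the wrong side of zero
    type_s = stats.norm.cdf(-lam - z_crit) / power

    # Simulate estimates around the true effect to get the expected
    # exaggeration ratio among significant results.
    rng = np.random.default_rng(seed)
    estimates = rng.normal(true_effect, se, n_sims)
    significant = np.abs(estimates) > z_crit * se
    exaggeration = np.mean(np.abs(estimates[significant])) / abs(true_effect)
    return power, type_s, exaggeration
```

For a noisy study (say, a plausible true effect of 0.1 with a standard error of 0.1), power comes out around 17%, and any significant estimate must overstate the true effect by construction – which is exactly the warning in the post: with low power, statistically significant results are systematically exaggerated.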

Can you help some firms without hurting others? Yes, in a new Kenyan business training evaluation


There are a multitude of government programs that directly try to help particular firms grow, and business training is one of the most common forms of such support. A key concern when thinking about the impacts of such programs is whether any gains to participating firms come at the expense of their market competitors. For example, perhaps you train some businesses to market their products slightly better, causing customers to abandon their competitors and simply reallocating which businesses sell the product. This reallocation can still be economically beneficial if it improves allocative efficiency, but failing to account for the losses to untrained firms would cause you to overestimate the overall program impact. This is a problem for most impact evaluations, which randomize at the individual level which firms get to participate in a program.

In a new working paper, I report on a business training experiment I ran with the ILO in Kenya, which was designed to measure these spillovers. Over a three-year period, we find that trained firms are able to sell more without their competitors selling less – by diversifying the set of products they produce and building underdeveloped markets.

Weekly links March 3: financial literacy done right, e-voting, private vs public schooling, and more…


Weekly links Feb 24: school spending matters, nudging financial health, cash transfers bring giant snakes and blood magic, and more…

  • On the 74 million blog, interview with Kirabo Jackson about the importance of school spending and other education-related discussion: “In casual conversation with most economists, they would say, “Yeah, yeah, we know that school spending doesn’t matter.” I sort of started from that standpoint and thought, Let me look at the literature and see what the evidence base is for that statement. As I kept on looking through, it became pretty clear that the evidence supporting that idea was pretty weak.” Also discussion on the need to measure things beyond test scores.
  • IPA has a nice little booklet on nudges for financial health – a quick summary of the evidence for commitment devices, opt-out defaults, and reminders.

The State of Development Journals 2017: Quality, Acceptance Rates, and Review Times

I recently became a co-editor at the World Bank Economic Review, and was surprised to learn how low the acceptance rate is for submitted papers. The American Economic Review and other AEA journals such as the AEJ Applied publish annual editor reports in which key information on acceptance rates and review times is made publicly available, but no comparable information exists for development economics journals.
