
Weekly links March 24: why those of us in our 40s matter so much, an ALMP program that may be working, more CSAE round-ups, and more…

By David McKenzie

The Iron Law of ALMPs: Offer a Program to 100 People, Maybe 2 Get Jobs

By David McKenzie

I have just finished writing up and expanding my recent policy talk on active labor market policies (ALMPs) into a research paper (ungated version) which provides a critical overview of impact evaluations on this topic. While my talk focused more on summarizing a lot of my own work on this topic, for this review paper I looked a lot more into the growing number of randomized experiments evaluating these policies in developing countries. Much of this literature is very new: out of the 24 RCTs I summarize results from in several tables, 16 were published in 2015 or later, and only one before 2011.

I focus on three main types of ALMPs: vocational training programs, wage subsidies, and job search assistance services like screening and matching. I’ll summarize a few findings and implications for evaluations that might be of most interest to our blog readers – the paper, of course, provides much more detail and discusses the implications for policy and for other types of ALMPs at greater length.
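One evaluation implication of the “Iron Law” headline is worth making concrete: if a program moves employment by only a couple of percentage points, detecting that effect requires very large samples. Below is a minimal sketch of a standard two-proportion sample-size calculation – my own illustration with made-up numbers, not taken from the paper:

```python
from scipy.stats import norm

def n_per_arm(p_control, effect, alpha=0.05, power=0.80):
    """Sample size per arm for a two-proportion z-test with equal arms."""
    p_treat = p_control + effect
    p_bar = (p_control + p_treat) / 2
    z_a = norm.ppf(1 - alpha / 2)   # critical value for a two-sided test
    z_b = norm.ppf(power)           # quantile for the desired power
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_control * (1 - p_control)
                    + p_treat * (1 - p_treat)) ** 0.5) ** 2
    return num / effect ** 2

# A 2 percentage point gain on a 50% employment base needs
# roughly 9,800 participants per arm:
print(round(n_per_arm(0.50, 0.02)))
```

On a 50 percent employment base, an effect of 2 in 100 calls for close to ten thousand participants per arm, which illustrates why effects of this size are hard to detect in typical evaluation samples.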

Weekly links March 17: Irish insights, non-working rural women, changes afoot in IRBs, and more…

By David McKenzie

What’s new in education research? Impact evaluations and measurement – March round-up

By David Evans

Here is a curated round-up of recent research on education in low- and middle-income countries, with a few findings from high-income countries that I found relevant. All are from the last few months, since my last round-up.

If I’m missing recent articles that you’ve found useful, please add them in the comments!

A pre-analysis plan is the only way to take your p-value at face value

By Berk Ozler

Andrew Gelman has a post from last week arguing that the value of preregistration of studies is akin to the value of random sampling and RCTs: it lets you make inferences without relying on untestable assumptions. His argument, which is nicely laid out in this paper, is that we don’t need to assume nefarious practices by study authors – specification searching, selective reporting, and the like – to doubt that the p-value reported in the paper we’re reading is correct.
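Gelman’s point can be illustrated with a small simulation (entirely hypothetical numbers, my own sketch rather than anything from his paper): an analyst who innocently examines several outcomes and reports the cleanest-looking one will reject a true null far more often than the nominal 5 percent, with no nefarious intent at all.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def min_p_over_outcomes(n=50, k=5):
    """Under a true null, test k outcomes and keep the smallest p-value,
    mimicking an analyst who reports the 'cleanest' result."""
    treat = rng.standard_normal((n, k))
    control = rng.standard_normal((n, k))
    return min(ttest_ind(treat[:, j], control[:, j]).pvalue for j in range(k))

sims = [min_p_over_outcomes() for _ in range(2000)]
# A nominal 5% test rejects far more often than 5% under the null
# (roughly 1 - 0.95**5, about 23%, for independent outcomes):
print(sum(p < 0.05 for p in sims) / len(sims))
```

A pre-analysis plan pins down the outcome in advance, so the single reported p-value really does have its nominal interpretation.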

Weekly links March 10: Ex post power calcs ok? Indian reforms, good and bad policies, and more…

By David McKenzie
  • Andrew Gelman argues that it can make sense to do design analysis/power calculations after the data have been collected – but he also makes clear how NOT to do this. For example, if a study with a small sample and noisy measurement finds a statistically significant 40% increase in profits, don’t then check whether it had power to detect a 40% increase. Instead, look at the probability that the treatment effect is of the wrong sign, or that its magnitude is overestimated, and base the effect size you examine power for on external information. There is an accompanying R function, retrodesign(), to do these calculations.
  • Annie Lowrey interviews Angus Deaton in The Atlantic, discussing whether it is better to be poor in the Mississippi Delta or in Bangladesh, opioid addiction, and the class of President Obama.
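For readers who prefer Python to R, the design-analysis idea behind retrodesign() can be sketched by simulation, under the assumption of a normally distributed estimator (the parameter values below are illustrative, not from Gelman’s examples):

```python
import numpy as np
from scipy.stats import norm

def retrodesign(true_effect, se, alpha=0.05, n_sims=100_000, seed=0):
    """Gelman & Carlin-style design analysis: given an assumed true effect
    and standard error, return power, the probability that a significant
    estimate has the wrong sign (Type S), and the average factor by which
    significant estimates exaggerate the true effect (Type M)."""
    z = norm.ppf(1 - alpha / 2)                 # e.g. 1.96 for alpha=0.05
    rng = np.random.default_rng(seed)
    est = rng.normal(true_effect, se, n_sims)   # sampling distribution
    sig = np.abs(est) > z * se                  # statistically significant draws
    power = sig.mean()
    type_s = (est[sig] * np.sign(true_effect) < 0).mean()
    exaggeration = np.abs(est[sig]).mean() / abs(true_effect)
    return power, type_s, exaggeration

# A small true effect measured noisily: significant estimates are
# routinely several times too large.
power, type_s, exag = retrodesign(true_effect=0.1, se=0.5)
print(power, type_s, exag)
```

With a true effect of 0.1 and a standard error of 0.5, significant results have a non-trivial chance of carrying the wrong sign and, on average, overstate the effect many times over – which is exactly the post-hoc check Gelman recommends.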

Can you help some firms without hurting others? Yes, in a new Kenyan business training evaluation

By David McKenzie

A multitude of government programs directly try to help particular firms grow, and business training is one of the most common forms of such support. A key concern when thinking about the impacts of such programs is whether any gains to participating firms come at the expense of their market competitors. For example, perhaps you train some businesses to market their products slightly better, causing customers to abandon their competitors and simply reallocating which businesses sell the product. This reallocation can still be economically beneficial if it improves allocative efficiency, but failing to account for the losses to untrained firms would cause you to overestimate the overall program impact. This is a problem for most impact evaluations, which randomize at the individual level which firms get to participate in a program.
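The reallocation concern can be made concrete with a toy example (purely illustrative numbers, unrelated to the Kenya experiment): in a market with fixed total demand, training that only poaches customers produces a large treated-versus-control difference even though the program creates nothing in the aggregate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical market: 100 firms each selling 10 units, fixed total demand.
n = 100
sales = np.full(n, 10.0)
treated = rng.choice(n, size=50, replace=False)
untreated = np.setdiff1d(np.arange(n), treated)

# Training only reallocates customers: every unit a trained firm gains
# is a unit an untrained firm loses.
shift = 2.0
sales[treated] += shift
sales[untreated] -= shift * len(treated) / len(untreated)

naive = sales[treated].mean() - sales[untreated].mean()  # looks like a big gain
total = sales.sum() - n * 10.0                           # true aggregate gain
print(naive, total)
```

The naive treated-versus-control comparison shows a gain of 4 units per firm, yet total sales are unchanged – measuring the spillovers onto untrained competitors is what distinguishes real growth from pure reallocation.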

In a new working paper, I report on a business training experiment I ran with the ILO in Kenya, which was designed to measure these spillovers. We find that, over a three-year period, trained firms are able to sell more without their competitors selling less – they do so by diversifying the set of products they produce and by building underdeveloped markets.