This post was originally published in the Brookings Future Development blog series as "Can government help the unemployed find work?"
Active labor market programs (ALMPs) like job matching, training, wage subsidies, start-up support, and public works for the unemployed have a less than stellar reputation. "Ineffective," "a charade," and "a waste of money" are labels one commonly hears when discussing ALMPs; and even when positive effects are acknowledged, they are portrayed as too small to matter. At the same time, these programs are widely used, not only in high-income countries but also in many developing countries, often with the hope that they will solve a range of labor market problems, unemployment in particular. Are policymakers wrong to pursue these programs?
What does the evidence tell us? Are ALMPs effective? The short answer is yes: they have a seemingly small but statistically significant positive impact on the likelihood of finding work. This is what a recent so-called "meta-study" finds, summarizing the evidence from 207 different evaluations. (The meta-analysis codifies 857 effect estimates from 47 countries and then tries to identify common patterns.) Crucially, the evidence suggests that the impact of ALMPs depends on the time horizon one looks at: the effects become much larger over longer horizons. In the short run, program participants have only a 1.6 percentage point higher probability of finding work within the first year than non-participants. For example, if 30 out of 100 non-participants had a job after one year, 31.6 out of 100 participants did, a small difference indeed. However, in the medium term, between one and two years after program completion, the difference grows to 5.4 percentage points, and after two years, to 8.7 percentage points. So, if after two years 50 out of 100 non-participants had a job, 58.7 out of 100 participants did, a much bigger difference than within the first year.
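To make the arithmetic concrete, here is a minimal sketch in Python that reproduces the back-of-the-envelope numbers above. The base rates of 30 and 50 employed per 100 non-participants are the hypothetical values used in the text for illustration, not results from the study.

```python
# Illustrative only: absolute effects in percentage points translated into
# employed persons per 100 job seekers. Base rates of 30 and 50 per 100 are
# the hypothetical values used in the text, not study results.
scenarios = [
    ("within the first year", 30.0, 1.6),  # (label, base rate per 100, effect in percentage points)
    ("after two years",       50.0, 8.7),
]
for label, base, effect_pp in scenarios:
    treated = base + effect_pp
    print(f"{label}: {base:.0f} employed per 100 non-participants vs. {treated:.1f} per 100 participants")
```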
Figure 1 plots the effect estimates (and their confidence intervals) by program for the short, medium, and long term. One can clearly observe a shift towards higher impact when going from the short to the long run. The various programs are "stacked" above each other, ranked from the program with the least impact at the bottom to the program with the highest impact at the top. The red diamond represents the effect estimate for each program, and the blue line the confidence interval around that point estimate. For example, the least well-performing program in the short term has an estimated average effect of about -0.06, corresponding to a negative effect of about 6 percentage points. The confidence interval for that estimate ranges from about -0.11 to -0.01, making the estimate statistically significant because the interval lies entirely to the left of zero. On the other hand, the best-performing program in the short run has an estimated effect of more than +0.2 (20 percentage points), with a confidence interval ranging from about 0.1 to 0.3 (also statistically significant, as the interval lies entirely to the right of zero). Moving from the short to the long run, the estimates and their confidence intervals shift towards the right, implying that programs have a higher impact in the long run.
Figure 1. ALMPs: Long-term effects are much larger than short-term gains
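To spell out the significance reading used above, the sketch below treats an estimate as statistically significant when its confidence interval lies entirely on one side of zero. The interval endpoints are the approximate values read off Figure 1 as quoted in the text.

```python
# Illustrative check of the rule described above: an effect estimate is read as
# statistically significant when its confidence interval excludes zero entirely.
# Interval endpoints are approximate values read off Figure 1, as quoted in the text.
def excludes_zero(lower: float, upper: float) -> bool:
    """True if the interval lies entirely to the left or right of zero."""
    return upper < 0 or lower > 0

examples = {
    "least well-performing short-run program": (-0.11, -0.01),  # point estimate about -0.06
    "best-performing short-run program":       (0.10, 0.30),    # point estimate above +0.20
}
for name, (low, high) in examples.items():
    print(f"{name}: CI [{low:+.2f}, {high:+.2f}] -> significant: {excludes_zero(low, high)}")
```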
It's important to keep in mind that the relative size of these effects crucially depends on how well the control group does in the labor market. The relative effects can actually be quite large: even a 1.6 percentage point increase in the short term is substantial if only eight out of 100 people in the control group found work within the first year, since 1.6 additional successes on a base of eight imply a 20 percent higher success rate in the treatment group. In fact, 20 percent is about the average relative effect the study finds in the data.
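As a quick check on that relative-effect arithmetic, here is a minimal sketch using the illustrative base rate of eight employed per 100 in the control group.

```python
# Relative effect implied by an absolute gain of 1.6 percentage points when only
# 8 out of 100 people in the control group found work (illustrative base rate).
control_rate_pp = 8.0    # control-group success rate, in percentage points
effect_pp = 1.6          # absolute treatment effect, in percentage points
relative_effect = effect_pp / control_rate_pp
print(f"Relative improvement: {relative_effect:.0%}")  # prints 20%
```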
There are several interesting patterns underneath these averages. First, not all types of programs have a bigger impact over time. Job search assistance and job matching services typically have large short-term effects, which then either do not increase further or fade out over the longer run. This empirical pattern matches how these programs work: they try to get people into jobs quickly. Job training programs, on the other hand, typically have very small or zero short-term effects, but large and sustained effects in the long run, reflecting the human capital investment they entail. Importantly, training programs seem to work particularly well for the long-term unemployed. Second, the effects do not seem to differ substantially between high-income countries (HICs) and low- and middle-income countries (LMICs): short-term effects are on average even larger in LMICs than in HICs (4.6 vs. 0.7 percentage points), but virtually the same size in the medium run (5.5 percentage points in HICs, 5.3 in LMICs) and the long run (8.7 in HICs, 8.5 in LMICs). Third, the study finds no indication that experimental and non-experimental study designs produce different results.
On average, these programs have a significant positive effect, even if that effect sometimes seems small. One must bear in mind that these effects also depend on general labor market conditions: if few people are able to find work at all, even a small absolute effect can be meaningful and, in fact, quite large in relative terms. Importantly, the effects, especially of training programs, increase substantially over the longer run. That is, investments in ALMPs tend to pay off in the long term, but the returns are often not visible immediately. This time pattern also matters for how ALMPs are assessed: to get a complete picture, these longer-term effects need to be taken into account when evaluating programs. Doing so would ultimately paint a less sobering and more optimistic picture, in particular considering that different program types can be successfully matched to different participant groups.