Despite the large and growing literatures on migration in economics, sociology, and other social sciences, there is surprisingly little work that actually evaluates the impact of particular migration policies (most of the literature concerns the determinants of migrating, and the consequences of doing so for the migrants, their families, and for native workers). I am therefore always interested to see new work in this area, particularly work that manages to obtain experimental variation in policy implementation. A new paper by Pernilla Andersson Joona and Lena Nekby, forthcoming in the Scandinavian Journal of Economics, looks at the important issue of policy efforts to help new immigrants find work (journal version, ungated earlier version that is missing the cost-benefit discussion). For those of you less interested in migration, there are also some interesting methodological points about the challenges of doing experiments with governments that I thought I'd discuss.
The context is introduction programs for newly arrived immigrants in Sweden, which cover refugees as well as tied movers who arrive to follow a main applicant. The programs had a poor record of getting immigrants into jobs, with only 30% of males and 20% of females regularly employed three years after participating.
The intervention was a pilot program providing intensive counseling and coaching by Public Employment Service caseworkers with considerably reduced caseloads. The coaching aimed to help immigrants find, apply for, and secure unsubsidized employment, and/or to better access other appropriate active labor market programs. Caseworkers for the treatment group handled 35-40 cases per month, compared to 200-250 cases per month for the control group. Participation in the treatment lasted a maximum of one year, with treatment participants meeting their caseworker 17-18 times over the year on average. The treatment group comprises 953 immigrants and the control group 335.
How was the randomization done?
There are a couple of issues which point to the difficulties of getting governments to implement policy experiments correctly, even when there is some willingness to trial a pilot program. The first is external validity: the trial introduction program was introduced in three Swedish counties in October 2006 (we are not told why these counties were chosen, although Stockholm is one of them). Within these counties, 22 municipalities participated in the trial program (we aren't told whether this is all the municipalities in these three counties, or whether some opted out entirely). Then only 9 of the 22 municipalities that agreed to implement the program agreed to do so experimentally. The paper contains no comparison of those that did and didn't, or discussion of why some municipalities didn't want to – but this issue also arose in many of the U.S. job training programs.
The second issue is the method of randomizing. One complication, common to many such programs, is that participants enrolled continuously, which led centralized control over randomization to be deemed infeasible. Instead, local office managers were told to identify eligible participants, print the first page from the public employment registry for each worker, place these pages face down and shuffle them, and then randomly pick participants for treatment (intensive coaching) and control (the regular introduction program). Such random assignment was to occur at regular intervals, which varied across locations and appear not to have been well recorded. One might suspect some deviations from pure randomness in such an approach, and indeed a comparison of means shows that 16 of 32 pre-treatment variables differ significantly between the treatment and control groups at the 10% level or lower, and an F-test of joint significance rejects the null that treatment status is orthogonal to pre-treatment characteristics of the immigrants. Now, given how new these migrants were to the labor market, the local offices most likely didn't know very much about them, and it isn't clear whether they had any incentive for systematically biased assignment, but one is still left wondering whether this was just a very unlucky draw or reflects non-adherence to randomization.
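As a rough sanity check on how unlucky such a draw would have to be: if randomization were valid and the 32 balance tests were independent (an approximation – pre-treatment covariates are correlated, so the true tail probability is larger than this), the number of rejections at the 10% level would be roughly Binomial(32, 0.1). A quick back-of-the-envelope sketch:

```python
from math import comb

n_tests, alpha, n_reject = 32, 0.10, 16

# P(X >= 16) for X ~ Binomial(32, 0.1): the chance of seeing 16 or more
# rejections at the 10% level if randomization were valid and the tests
# were independent (an approximation -- the covariates are correlated).
p_tail = sum(comb(n_tests, k) * alpha**k * (1 - alpha)**(n_tests - k)
             for k in range(n_reject, n_tests + 1))

print(f"Expected rejections under valid randomization: {n_tests * alpha:.1f}")
print(f"P(>= {n_reject} rejections): {p_tail:.2e}")
```

Under independence you would expect about 3 rejections, and the chance of 16 or more is vanishingly small; correlation across covariates softens this, but it still makes "unlucky draw" a hard story to sustain.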
Outcomes: all the outcome data come from the administrative database, which records whether individuals are in regular unsubsidized employment, subsidized employment, regular education, or labor-market training. This has a couple of nice advantages – first, it is measured multiple times, so the authors can look at impacts at 200 days and 365 days after registration; and second, there is no attrition. However, the paper doesn't discuss the accuracy of these data, or whether there are incentives for people to misreport employment status – I assume that, this being Sweden, formal employment status is well recorded, so what we might miss are informal employment activities.
Results: at the end of the year-long program, treatment participants are 3.2 percentage points more likely to be regularly employed than the control group and 9.6 percentage points more likely to be in labor-market training. Since only 7.2% of the control group are in regular employment at the end of one year and only 2.4% in regular training, this is a sizeable relative increase. When they look at treatment heterogeneity, the main effect seems to be for men, who are 72% of the sample – though the authors don't test whether the difference by gender is statistically significant – so the lack of effect for women might just reflect a lack of power with the small female sample.
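To make the relative magnitudes explicit, using the figures reported above:

```python
# Control-group rates and treatment effects at the end of the program,
# in percentage points, as reported in the paper.
control_employed, effect_employed = 7.2, 3.2
control_training, effect_training = 2.4, 9.6

rel_employed = effect_employed / control_employed  # relative increase in employment
rel_training = effect_training / control_training  # relative increase in training

print(f"Relative increase in regular employment: {rel_employed:.0%}")
print(f"Relative increase in labor-market training: {rel_training:.0%}")
```

So the treatment raises regular employment by roughly 44% relative to the control mean, and fivefolds the labor-market training rate – large in relative terms precisely because the baseline rates are so low.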
Cost-benefit: while the relative increase from treatment is large, the absolute treatment effect is still small – the entire trial in all 22 municipalities involved 3,111 immigrants and, based on the experimental estimate, would lead to 186 more individuals being employed at the end of it. The total cost of this trial was approximately US$12.2 million – or about $65,500 per job created! The question is then how you value the jobs created – since they don't have wage data on their participants, the authors use registry data on the average wage of newly arrived immigrants from Asian countries (the main source of trial participants), which is about $23,700. So on this basis, the costs greatly exceed the benefits unless the benefits persist for many years and discount rates aren't too high.
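The back-of-the-envelope arithmetic can be made explicit. A sketch taking the post's figures at face value; the 3% discount rate and the benefit fractions are my own illustrative assumptions, not numbers from the paper:

```python
total_cost = 12.2e6    # approximate total trial cost, US$ (from the post)
jobs_created = 186     # implied additional individuals employed (from the post)
annual_wage = 23_700   # registry-based annual wage, US$ (from the post)
r = 0.03               # discount rate -- an illustrative assumption

cost_per_job = total_cost / jobs_created
print(f"Cost per job created: ${cost_per_job:,.0f}")  # roughly $65,500

def break_even_years(annual_benefit, rate, cost, cap=50):
    """Years of discounted benefits needed to cover the cost per job."""
    npv = 0.0
    for year in range(1, cap + 1):
        npv += annual_benefit / (1 + rate) ** year
        if npv >= cost:
            return year
    return None  # never breaks even within `cap` years

# Sensitivity to how much of the wage counts as a net social benefit
# (the fractions below are hypothetical, not from the paper).
for frac in (1.0, 0.5, 0.25):
    yrs = break_even_years(frac * annual_wage, r, cost_per_job)
    print(f"benefit = {frac:.0%} of wage -> break-even in about {yrs} years")
```

The sketch makes the key sensitivity visible: if the entire wage counted as a net benefit, the program would break even in a few years, but once you count only a fraction of the wage (because of crowding out, or because only part of the wage accrues as a fiscal gain), the required persistence stretches out well beyond a decade.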
A couple of thoughts on this calculation – first, another benefit is the government not having to pay benefits or other services to these immigrants once they have jobs – I'm not sure what Sweden provides in this case. On the other hand, assuming that the benefit is the total wage paid to the migrant ignores any crowding out of jobs, and, from a fiscal viewpoint, ignores that most of the cost falls on the state while the benefit goes to the worker and the firm hiring him or her. This is a tricky thing to value – as noted by Alaka in her recent post on cost effectiveness versus cost-benefit.
My bottom line on this paper is that it is exciting to see such policies actually getting tested rather than just being implemented blindly – the trial program was phased out in June 2008, although the paper doesn't make clear how much the study's results had to do with this decision. For those of you thinking of implementing policy interventions with governments, this is a nice example of how it can be done – but also of some of the potential pitfalls to try to avoid in implementation.