Jobs, or the lack thereof, dominate policy discussions around the world, with governments everywhere facing a shortage of evidence on which programs work in generating new employment and in helping particular groups of the unemployed find new jobs. I spent part of last week at the NBER Summer Institute, where papers in the Labor Studies and Entrepreneurship sessions focused on employment.
Over at Labor Studies, David Autor and Sari Kerr presented results from an evaluation of the Work First program in Detroit. This program provides job search and job placement help to the unemployed – including a one-week course on improving job application skills, and the use of private contractors who try to place individuals into temporary-help jobs or direct employment. Contractors vary in the types of jobs they place people into, and are rotated frequently – so the contractor an unemployed individual is matched with is determined by the application date. The authors use this as an instrumental variable, and they find positive impacts on earnings from placing people into direct employment, but no effect of placement into temporary-help jobs.
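For readers less familiar with how a rotation design like this gets used, here is a minimal two-stage least squares sketch on simulated data. To be clear, the variable names and data-generating process below are invented for illustration – this is not the paper's data or specification – but it shows the basic logic: the (effectively random) contractor draw shifts the chance of a direct-employment placement without otherwise affecting earnings.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Invented simulation: the application date determines which contractor
# an individual draws (instrument z), which shifts the chance of a
# direct-employment placement d; unobserved "ability" affects both
# placement and later earnings y, making d endogenous.
ability = rng.normal(size=n)
z = rng.integers(0, 2, size=n).astype(float)  # contractor type (rotated)
d = ((0.8 * z + 0.5 * ability + rng.normal(size=n)) > 0.5).astype(float)
beta_true = 2.0                               # true earnings effect of direct placement
y = beta_true * d + ability + rng.normal(size=n)

X = np.column_stack([np.ones(n), d])          # regressors: intercept + placement
Z = np.column_stack([np.ones(n), z])          # instruments: intercept + contractor draw

# Naive OLS is biased upward: ability raises both placement and earnings
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0][1]

# Just-identified 2SLS: solve (Z'X) beta = Z'y
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)[1]

print(f"OLS: {beta_ols:.2f}, 2SLS: {beta_iv:.2f} (truth {beta_true})")
```

The point of the sketch is simply that the instrument purges the selection-on-ability bias that a naive earnings comparison would suffer from.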
One interesting feature of this paper is that it attempts to go beyond average effects by employing instrumental variables quantile regression (IVQR). This technique has been around for a while (Chernozhukov and Hansen have a 2005 Econometrica paper) and I have used it myself on one occasion. If its assumptions hold, it can tell us not only about the average effect but also about the distribution of treatment effects, which is a common question in many evaluations. However, the technique doesn't seem to get used that much – and indeed my read of the crowd at the NBER was that a number of skeptics in the audience felt heterogeneity would be better examined through interactions of the treatment with baseline variables. The key assumption required for IVQR to work is that individual unobservables are rank invariant to potential treatment status – so that, for example, the same people who would have relatively high earnings without the treatment are the ones who would have relatively high earnings with it. This assumption does not seem plausible to a number of people, which might explain the relative lack of use of the method.
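For the curious, the Chernozhukov–Hansen estimator has a simple "inverse" logic that can be sketched in a few lines: for each candidate treatment effect on a grid, quantile-regress the outcome net of that candidate effect on the instrument, and pick the candidate at which the instrument gets (near-)zero weight. The simulation below is entirely invented for illustration (homogeneous effect, rank invariance built in by construction), and the quantile regression is a crude check-loss minimization rather than a production implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000
tau = 0.5        # quantile of interest (here the median)

# Invented simulation satisfying rank invariance: a single unobservable
# u ranks people with and without treatment. Treatment d is endogenous
# (u raises both take-up and outcomes); z is a valid instrument.
u = rng.normal(size=n)
z = rng.integers(0, 2, size=n).astype(float)
d = ((1.2 * z + 0.7 * u + rng.normal(size=n)) > 0.6).astype(float)
alpha_true = 2.0
y = alpha_true * d + u

def quantile_fit(resp, X, tau):
    """Linear quantile regression by direct minimization of the check loss."""
    loss = lambda b: np.mean((resp - X @ b) * (tau - (resp - X @ b < 0)))
    b0 = np.linalg.lstsq(X, resp, rcond=None)[0]   # OLS starting values
    return minimize(loss, b0, method="Nelder-Mead").x

# Inverse quantile regression: over a grid of candidate effects a,
# quantile-regress y - a*d on the instrument; the IVQR estimate is the
# a at which the instrument's coefficient is closest to zero.
X = np.column_stack([np.ones(n), z])
grid = np.linspace(0.0, 4.0, 41)
z_coefs = [abs(quantile_fit(y - a * d, X, tau)[1]) for a in grid]
alpha_hat = grid[int(np.argmin(z_coefs))]
print(f"IVQR estimate at tau={tau}: {alpha_hat:.2f} (truth {alpha_true})")
```

Repeating the grid search at different values of tau is what delivers the quantile-by-quantile picture of treatment effects – and it is exactly the rank-invariance assumption baked into the simulation above that skeptics doubt in real applications.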
A second interesting paper at Labor Studies was new survey work by Alan Krueger and Andreas Mueller looking in depth at how the unemployed search for, and find, jobs. They conducted a weekly survey of the unemployed in New Jersey for up to 24 weeks, measuring how much time people spend looking for jobs, the reservation wages they say they will accept, and other elements of job search behavior. One interesting finding is that the amount of time devoted to job search drops rapidly with time spent unemployed – so failure to find a job quickly leads to discouragement. It would be great to see such detailed data on job-search behavior collected in a developing-country context. One concern with the Krueger paper is that the survey was done online, the initial response rate was only 10%, and people who answered the first survey then answered only 40% of the weekly follow-ups. Despite this, the authors argue that the use of rich administrative data and reweighting for survey non-response allow valid inferences to be made – which, if you believe it, is good news for those of us struggling to survey hard-to-survey populations.
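The reweighting idea is worth spelling out, since it is the workhorse fix for non-response when good administrative covariates exist: model the probability of responding as a function of those covariates, then weight each respondent by the inverse of their estimated response probability. The sketch below is an invented simulation (the covariate, outcome, and response model are mine, not Krueger and Mueller's), but it shows why a naive respondent mean is biased and how inverse-probability weights can undo selection on observables:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000

# Invented population: an administrative covariate x (think prior
# earnings) drives both the outcome and the chance of answering an
# online survey, so respondents are a selected sample.
x = rng.normal(size=n)
outcome = 1.0 + 0.8 * x + rng.normal(size=n)        # e.g. weekly search hours
p_respond = 1.0 / (1.0 + np.exp(-(-1.5 + 1.5 * x)))  # strong selection on x
responded = rng.random(n) < p_respond

# Fit a logistic response model on the admin covariate (Newton's method)
X = np.column_stack([np.ones(n), x])
b = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X @ b))
    grad = X.T @ (responded - p)                     # score
    H = X.T @ (X * (p * (1 - p))[:, None])           # information matrix
    b += np.linalg.solve(H, grad)

# Inverse-probability weights for respondents: 1 / P(respond | x)
p_hat = 1.0 / (1.0 + np.exp(-X @ b))
w = 1.0 / p_hat[responded]

naive = outcome[responded].mean()
weighted = np.average(outcome[responded], weights=w)
truth = outcome.mean()
print(f"naive {naive:.2f}, reweighted {weighted:.2f}, population {truth:.2f}")
```

Of course, this only corrects for selection on the observables in the response model – if the unemployed who stop answering differ in unobserved ways, the weights cannot fix that, which is exactly why one might remain cautious about a 10% initial response rate.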
Over at Entrepreneurship, Rob Fairlie, Dean Karlan, and Jon Zinman presented preliminary results from a long-term evaluation of the Growing America through Entrepreneurship (GATE) business training program in the U.S. This is quite a large randomized experiment, with 4,200 participants randomly allocated in equal numbers to treatment and control – the treatment group getting classroom training and individual counseling on setting up and running a business. A very nice feature of the study is that follow-up surveys were conducted at 6, 18, and 60 months, allowing both short- and medium-term outcomes to be measured. The program seems to have sped up the rate of business start-up, especially among the unemployed, but by 5 years those who participated in the program were no more likely to own a business, and the program showed no significant impacts on sales, employment, or other business outcomes. Fairlie et al. look carefully at heterogeneity of impacts, attempting to test whether particular market failures might justify the program for particular groups, but don't find much heterogeneity of effects.
It wasn’t all U.S.-focused – I presented preliminary results of an attempt to formalize firms in Sri Lanka, which I will blog about when we have more data collected; Bilal Zia presented on business training in Bosnia; and Nick Bloom presented on our joint work on improving management in India. Even in these studies, where there are some sizeable impacts on business outcomes like profits and productivity, there isn’t any significant impact on employment in the short run. The elusive search for ways to generate more employment continues…