There are a multitude of government programs that directly try to help particular firms grow, and business training is one of the most common forms of such support. A key concern when assessing the impacts of such programs is whether any gains to participating firms come at the expense of their market competitors. For example, perhaps you train some businesses to market their products slightly better, causing customers to abandon competitors and simply reallocating which businesses sell the product. This reallocation can still be economically beneficial if it improves allocative efficiency, but failing to account for the losses to untrained firms would cause you to overestimate the overall program impact. This is a problem for most impact evaluations, which randomize at the individual level which firms get to participate in a program.
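The business-stealing concern above can be illustrated with a stylized simulation (all numbers here are made up for illustration, not drawn from any actual study): when trained firms pull sales away from untrained firms in the same market, a naive treated-vs-control comparison from individual-level randomization overstates the program's overall effect.

```python
def simulate_market(spillover):
    """Simulate one market of 10 firms, 5 of them trained.

    Training raises a trained firm's sales by 2 units; with business
    stealing, each trained firm also pulls `spillover` units of sales
    away from every untrained firm in its market. (Illustrative
    numbers only.)
    """
    n, n_treated = 10, 5
    base = 10.0
    treated_sales = [base + 2 for _ in range(n_treated)]
    control_sales = [base - spillover * n_treated
                     for _ in range(n - n_treated)]
    return treated_sales, control_sales

def naive_effect(spillover, n_markets=1000):
    """Treated-minus-control difference in mean sales, as an
    individual-level randomization would estimate it."""
    t_tot = c_tot = 0.0
    for _ in range(n_markets):
        t, c = simulate_market(spillover)
        t_tot += sum(t) / len(t)
        c_tot += sum(c) / len(c)
    return t_tot / n_markets - c_tot / n_markets

# With no business stealing, the naive estimate recovers the true
# effect of 2. With stealing, control firms are hurt, so the naive
# comparison overstates the gain (2 units created + 1 unit stolen).
print(naive_effect(spillover=0.0))  # → 2.0
print(naive_effect(spillover=0.2))  # → 3.0
```

Measuring this bias in the field is exactly why an experiment would need to randomize (or at least measure outcomes) at the market level, not just across individual firms.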
In a new working paper, I report on a business training experiment I ran with the ILO in Kenya, which was designed to measure these spillovers. We find that, over a three-year period, trained firms are able to sell more without their competitors selling less – by diversifying the set of products they produce and by building underdeveloped markets.
A while back I blogged about work using active choice and enhanced active choice to encourage people to get flu shots and refill prescriptions. The basic idea is that relatively small modifications to the way a choice is presented can have large impacts on the take-up of a program. This seemed useful in the context of many of our training programs – attendance rates averaged 65 percent in a review of business training programs I did with Chris Woodruff. Therefore, for an ongoing evaluation of the GET AHEAD business training program in Kenya, we decided to test out this approach.
Last week I blogged about a paper that David wrote with Chris Woodruff which takes stock of the existing evidence on the impact of business training. The bottom line was that we still don’t know much. Part of the reason is that these types of evaluations are not straightforward to do – they have some pitfalls that you don’t always find in your garden-variety impact evaluation. So to…
What do we really know about how to build business capacity? A nice new paper by David McKenzie and Chris Woodruff takes a look at the evidence on business training programs – one of the more common tools used to build up small and medium enterprises. They do some work to make the papers comparable, which helps us add up the totality of the lessons. What’s more, as David and Chris go through the evidence, they come up with a lot of interesting…
There are now a variety of well-known experimental and non-experimental methods that economists use to learn whether a given program works. However, our tools for learning why a program does or does not work are much more limited.