In our post on Monday, we discussed a paper (joint with F Campos, A Coville, and A Fernandes) where we lay out our failures to evaluate a number of matching grant programs in Africa. By way of background, for those of you who missed it: matching grant programs are government co-financing (typically 50 percent) of the costs a firm incurs in purchasing business development services or undergoing quality improvement or technological upgrading. We undertook seven evaluations -- and pretty much all of them failed as experiments (although in a couple of cases we are still hoping to do non-experimental evaluations). There are three main proximate reasons why these evaluations went awry: 1) some governments decided not to randomize (sometimes reaching this conclusion after management changes), 2) insufficient numbers of applicants to allow for randomization, and 3) implementation delays.
Behind these proximate causes are five underlying causes -- which may provide some insight into how we can do this better in the future:
First off are political economy issues. Aside from the usual "we're not quite sure we want to know whether this works" source of resistance, we encountered two other types of political economy issues with matching grants. First, capture is likely to be more acute than with more diffuse programs. Keep in mind that the main intervention here is basically free money for a firm to use for investments. As such, various actors are likely to try to skew the selection criteria towards their constituency. The second manifestation involved electoral politics. This is best described by one of the examples we discuss in the paper: in one of the countries we worked in, we had agreement on a randomized design, with a fairly high level of political buy-in. But then there were riots, followed by a cabinet reshuffle. The new minister for industry decided that this pot of funds would now be key for the newly revised industrial strategy. Randomization was then out of the question.
The second set of underlying causes comes from the program eligibility criteria. Basically, these criteria tend to be skewed heavily towards larger formal firms -- and in many African countries, there are not that many largish formal firms that are keen to apply for this kind of program. Keeping these criteria strict is one of the tools groups like Chambers of Commerce use to help capture the programs for their members.
The third group of factors comes from last-mile delivery issues. A lot of effort goes into designing the eligibility criteria for these programs, but much less into marketing them. And even when there is communication, it needs to be targeted at those who are eligible -- which may be a narrow slice. In one survey discussed in the paper, of 209 firms surveyed one year after the launch, none had heard of the program. When followed up, 39 percent of the firms were interested, but only 6 percent were both interested and eligible. Another delivery issue is the tension between government oversight and firms' capital constraints. The government (understandably) wants to make sure that the funds are not misused. This often leads it to ask firms to put up the money and be reimbursed later. But given that these grants are designed, in part, with capital constraints in mind, this creates a disincentive for firms to participate.
A fourth set of factors comes from the incentives facing project staff. Given the non-trivial application processes for these grants, each additional application means more work for program staff. As such, they would rather not see a bunch of extra applications -- which works directly against the large applicant pools that randomization requires.
Finally, we faced problems matching the funding cycles for evaluations to the life cycle of the project. As indicated above, there may be issues that constrain demand for these types of programs. As such, the first round of funding usually ends up being a trial period (on the part of both the program and potential applicants). However, a lot of the funding we got for this work came with a two- or three-year horizon. And when early program participation was weak, this made securing additional funding hard.
Now, as a group, we saw some of these risks ex ante. But clearly, the mitigation techniques we put in place weren't enough. This leads us to six things to think about:
· Be more realistic about the time it takes to implement these programs -- for the evaluators, but also for program staff and evaluation funders
· Shift the programs from picking winners to picking positive treatment effects. These programs try to target gazelles -- which, if we could do it easily, would solve our funding problems. Maybe these programs could aim a bit more broadly. What should matter for governments is targeting firms that will grow the most as a result of their programs, which are not necessarily the firms that will grow the most overall. Lots could be learned from the heterogeneity of impacts, and this could be used to improve these programs.
· Make it easier for firms to apply. Those with higher potential may not be the long-established, larger formal firms.
· Shift techniques to deal with the potentially small samples. As David has discussed in a paper, more follow-up data collection will help, as will increased firm homogeneity (a back-of-the-envelope power sketch follows this list).
· Maybe shift to asking more program design questions rather than focusing on average impact. It may be worth tackling questions such as different modalities for reimbursement, different levels of matching funds, and the like when the overall impact remains elusive.
· Think about the methodology for capturing innovation. A lot of innovation fails, and here we may be interested in the relatively small number of significant success stories. This is going to require going beyond the simple average treatment effect (see the second sketch below for one option).
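To make the small-samples point concrete, here is a back-of-the-envelope power calculation in Python. This is our illustration rather than anything from the paper: it uses the textbook variance for a pooled post-treatment difference in means, where averaging m survey rounds with autocorrelation rho shrinks the error variance by a factor of (1 + (m - 1)rho)/m -- the logic behind the "more T" argument. The sample size, autocorrelation, and treatment share below are made-up numbers.

```python
from statistics import NormalDist
import math

def mde(n, m, rho, sigma=1.0, p=0.5, alpha=0.05, power=0.80):
    """Minimum detectable effect (in SD units when sigma = 1) for a
    randomized design with n firms, a share p assigned to treatment,
    and the average of m post-treatment survey rounds whose outcomes
    have autocorrelation rho (pooled difference-in-means estimator)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_power = NormalDist().inv_cdf(power)
    # Averaging m rounds shrinks the error variance by (1+(m-1)rho)/m.
    var = (sigma**2 / (p * (1 - p) * n)) * (1 + (m - 1) * rho) / m
    return (z_alpha + z_power) * math.sqrt(var)

# Illustrative small-sample matching grant setting: 120 applicant
# firms and noisy firm outcomes (autocorrelation ~0.3 across rounds).
for m in (1, 2, 4):
    print(f"{m} follow-up round(s): MDE = {mde(n=120, m=m, rho=0.3):.2f} SD")
```

With these illustrative numbers, moving from one to four follow-up rounds cuts the minimum detectable effect from roughly 0.51 to 0.35 standard deviations, a sizeable gain when recruiting more firms is not an option.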
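And on capturing innovation: one possible way to look beyond the mean (again a sketch of our own, not a method the paper commits to) is randomization inference on an upper quantile -- compare, say, the 90th percentile of outcomes in treatment and control, and get a p-value by reshuffling the treatment labels, which remains valid even with few firms.

```python
import numpy as np

def tail_effect_test(y_treat, y_control, q=0.90, n_perm=5000, seed=0):
    """Randomization-inference test for a treatment effect at the
    upper quantile q: the observed statistic is the difference in
    q-th quantiles, and the null distribution comes from randomly
    reshuffling treatment labels across the pooled sample."""
    rng = np.random.default_rng(seed)
    y = np.concatenate([y_treat, y_control])
    n_t = len(y_treat)
    observed = np.quantile(y_treat, q) - np.quantile(y_control, q)
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(y))
        null[i] = np.quantile(y[idx[:n_t]], q) - np.quantile(y[idx[n_t:]], q)
    p_value = np.mean(np.abs(null) >= abs(observed))
    return observed, p_value

# Illustrative data: most grants do little, a handful of firms take off.
rng = np.random.default_rng(1)
control = rng.lognormal(0.0, 1.0, 60)
treated = rng.lognormal(0.0, 1.0, 60)
treated[:6] *= 5.0  # a few large successes drive the right tail

print(tail_effect_test(treated, control))
```

In data like these, the mean difference can be statistically invisible while the upper tail is not, which is exactly the pattern a successful innovation program would generate.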
So these are a bunch of lessons we've learned from our recent failures with this type of program. Does anyone else want to share lessons -- positive or negative?