Published on Development Impact

Finally a matching grant evaluation that worked…at least until a war intervened

Several years ago I was among a group of researchers at the World Bank who all tried to conduct randomized experiments of matching grant projects (where the government funds part of the cost of firms innovating or upgrading technology, and the firm pays the rest). Strikingly, we attempted to implement an RCT on seven projects and failed each time, mostly because of an insufficient number of applicants. The resulting paper documenting the reasons for failure has, I think, been useful in questioning the way the World Bank implements some of these programs, and it attracted some policy attention as a result.

The Eighth Time is the Charm?
So after 7 failed attempts, I was asked if I wanted to help in the evaluation of a new matching grant project. The project took place in the Republic of Yemen in the aftermath of the Arab Spring, and provided firms with a matching grant of up to $10,000 as a 50 percent subsidy towards the cost of business services like finance and accounting systems, website creation, training, marketing, participation in exhibitions, and some associated goods such as office and IT equipment. Crucially, the project attempted to take heed of the lessons from the failed evaluations: eligibility criteria were kept broad; the application form was not complex and could be completed either online or on paper; and the program was well-advertised.
A new working paper (with Nabila Assaf and Ana Paula Cusolito) documents the short-term impact of this project. I’ll highlight some points of interest from the impact evaluation side:
  1. An oversubscription design finally was feasible: The combination of an easy application process and firms' need for the program meant that a lack of applicants was not an issue in Yemen. In total 820 applications were received, slightly more than four times the number of grants available for the first round (200). Firms were selected for the program from among the eligible applicants in public randomization events held in Sana’a on January 9, 2014 and Aden on January 12, 2014. A reserve list was chosen in case originally selected firms withdrew from the program, giving a treatment group size of 216 firms. We chose a random sample of 200 of the remaining applicants to serve as the control group for follow-up surveys (budget and logistics prevented us from tracking all unsuccessful applicants).
  2. The downside of wide applicability was huge heterogeneity: firms had a median size of 5 paid employees, but ranged from none to 950, with a mean of 14.8 and a standard deviation of 53. There is similarly huge heterogeneity in sales. This heterogeneity (along with item non-response on sales) means we have no power to measure impacts on financial outcomes and employment. We had planned to boost the sample size through a second year of the program, which was canceled due to the outbreak of civil war.
  3. These grants have additionality for innovation: a key policy concern for these programs is whether they cause firms to undertake additional innovative activities beyond what they would do anyway, or whether they are just resource transfers that don’t change firm behavior. Our follow-up survey finds clear evidence of additionality in the year after applying: firms receiving the grants were 30.3 percentage points (p.p.) more likely to introduce a new product, more than double the 26.5 percent rate in the control group; they also did more marketing (19.8 p.p. increase), introduced new accounting systems (48.5 p.p. increase), and did more worker training (33.2 p.p. increase) with the grants – in each case approximately doubling the control group rates.
  4. Surveying when a civil war is impending: In August 2014, part-way through this program, the rebel Houthis began demonstrations in Sana’a against increased fuel prices, and in September they took control of the city of Sana’a. A U.N.-brokered peace agreement was made in which they agreed to withdraw once a national unity government was formed. The situation worsened at the start of 2015, with the Houthis seizing control of the state television, President Hadi resigning and fleeing to Aden, and civil conflict breaking out in late March and early April 2015.
    We had planned to do a follow-up survey later in 2015, but with these events unfolding we brought the survey forward to get some data while it was still possible. This required a number of compromises: i) we had to switch from in-person to phone surveys, because security problems and gas shortages made travel impossible; ii) we then had to shorten the questionnaire and focus on a few key, more immediate outcomes; iii) we had to live with higher than usual attrition: 51% in the control group and 41% in the treatment group. Fortunately, baseline characteristics remain similar across the two groups among those who responded, but the start of airstrikes and suicide bombings made further chasing down of non-respondents impossible; and iv) we had to get the data out quickly: fearing the survey data would be destroyed, we pressed to receive it as soon as possible after collection rather than waiting for translation and cleaning. Within days of our receiving the data, an airstrike hit the survey company’s offices at night (fortunately no one was there).
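The oversubscription lottery in point 1 can be sketched in a few lines. This is a minimal illustration under my own assumptions, not the actual assignment code: the function and variable names are mine, and I infer a 16-firm reserve list from the 216-firm treatment group (216 treated minus 200 grants).

```python
import random

def run_lottery(applicants, n_grants=200, n_reserve=16, n_control=200, seed=2014):
    """Illustrative oversubscription lottery: award grants to a random subset,
    keep a reserve list in case winners withdraw, and sample a control group
    from the remaining applicants for follow-up surveys."""
    rng = random.Random(seed)  # in practice the draw was done at public events
    order = list(applicants)
    rng.shuffle(order)
    treatment = order[:n_grants + n_reserve]   # 200 grant winners + reserve list
    remaining = order[n_grants + n_reserve:]
    control = rng.sample(remaining, min(n_control, len(remaining)))
    return treatment, control

treatment, control = run_lottery([f"firm_{i}" for i in range(820)])
# yields 216 treatment firms and a 200-firm control sample, as in the paper
```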
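To see why the heterogeneity in point 2 kills statistical power, a back-of-the-envelope minimum detectable effect (MDE) calculation helps. This sketch is mine, not from the paper, and uses the standard two-sample formula at 5 percent significance and 80 percent power:

```python
import math

def mde(sd, n_t, n_c):
    """Minimum detectable effect for a difference in means:
    MDE = (z_{0.975} + z_{0.80}) * sd * sqrt(1/n_t + 1/n_c),
    where 1.96 + 0.84 = 2.8 for a 5% two-sided test with 80% power."""
    return 2.8 * sd * math.sqrt(1.0 / n_t + 1.0 / n_c)

# Employment: standard deviation of 53 with 216 treated and 200 control firms
print(round(mde(53, 216, 200), 1))  # about 14.6 employees
```

An MDE of roughly 14.6 employees is as large as the mean firm size itself, so only implausibly enormous treatment effects on employment could have been detected.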
So obviously the results come with even more caveats than usual – they are short-term, subject to attrition, and less precisely estimated than we would like. But given the dearth of evidence on the effectiveness of these programs, we think the impact evaluation was still useful: it demonstrates the feasibility of doing an RCT for these types of programs; finds strong evidence that the grants generated additional innovative activities; and shows that they can have an impact even in a fragile-state environment.

Bonus: Also some evidence on the impact of youth internships
We also have a new working paper that reports on a second intervention in the same project – providing youth with internships. Receiving an internship resulted in an almost doubling of work experience in 2014, and a 73 percent increase in income over this period compared to the control group. A short-term follow-up survey, conducted just as civil conflict was breaking out, shows that internship recipients had better employment outcomes than the control group in the first five months after the program ended. The same surveying issues arose as above, but intern applicants were easier to survey than firms, with response rates of 78% for the treatment group and 80% for the control group.


David McKenzie

Lead Economist, Development Research Group, World Bank
