Learning from the experiments that didn’t happen: Part I

David McKenzie

In the spirit of learning from failure, we thought we’d start the New Year by discussing lessons from a series of randomized experiments that we attempted to set up but that were ultimately never implemented. There is ongoing discussion about the need for a trial registry to ensure that all studies that are undertaken end up being reported on. Most of the attempted studies we discuss here would not even have made it to the trial registry stage, but we think there are still important lessons to be learned from the attempts. A new working paper provides more details.

Matching grant programs in Africa

The context is our attempts (together with Francisco Campos, Aidan Coville and Ana Fernandes) to evaluate matching grant programs in six African countries. Matching grant programs are one of the most common policy tools used by developing country governments to actively facilitate micro, small, and medium enterprise competitiveness, and have been included in more than 60 World Bank projects totaling over US$1.2 billion, funding over 100,000 micro, small and medium enterprises. They involve the government co-financing (typically 50 percent of) the costs of a firm purchasing business development services or undergoing quality improvement or technological upgrading.

Despite all the resources spent on these projects, there is currently very little rigorous evidence as to whether these grants spur firms to undertake innovative activities they otherwise would not have, or merely subsidize firms for actions they would take anyway. These programs typically cater to only a tiny fraction of the firms in a country, and the firms that self-select or are selected into them are likely to differ in a host of observable and unobservable ways from firms that do not receive the funding. Non-experimental evaluations are therefore likely to be positively biased if more entrepreneurial firms with positive productivity shocks are the ones seeking out the program, and negatively biased if it is better politically connected but less productive firms that receive the funding.
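To make the direction of these biases concrete, a standard textbook decomposition (our illustration, not from the original post) splits the naive comparison of funded and unfunded firms into the effect on funded firms plus a selection term, where $Y_1$ and $Y_0$ denote a firm's outcome with and without the grant and $D=1$ indicates funded firms:

$$E[Y_1 \mid D=1] - E[Y_0 \mid D=0] = \underbrace{E[Y_1 - Y_0 \mid D=1]}_{\text{effect on funded firms}} + \underbrace{E[Y_0 \mid D=1] - E[Y_0 \mid D=0]}_{\text{selection bias}}$$

The selection term is positive when more entrepreneurial firms seek out the program, and negative when less productive but better connected firms capture it; randomizing which eligible firms receive grants sets it to zero in expectation.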

Since the programs involve essentially giving free money to individual firms, ex ante they seem good targets for randomized evaluation.

Selecting projects

In February 2010, the DIME-FPD initiative of the World Bank organized a 4-day workshop on impact evaluation in Dakar, Senegal, which brought together researchers, World Bank operational teams, and key government counterparts to explain what is meant by impact evaluation and to begin building impact evaluations around key components of these projects. In the course of these activities, we identified six projects - five of which were World Bank-funded - in which matching grants would be used. The seventh project comes from an engagement that started in 2008 with the Department of Trade and Industry (DTI) in South Africa to identify critical projects that should undergo an impact evaluation.

The number of grants anticipated in each country ranged from 60 to 1,300, with average grant sizes ranging from US$1,250 to US$50,000.

Planned Randomization Strategy

Given that the government is effectively giving away free money to firms, one might expect significant demand for this funding, resulting in the need for projects to select which firms receive it. Since we believe there is substantial uncertainty over which firms would benefit most from receiving these funds, our suggestion was a randomized evaluation based on an oversubscription design.

The idea was to make the matching grant programs open to all firms meeting certain basic eligibility criteria, and then randomly select which firms would be awarded the grants. In the event of more demand for the grants than the project could fund, this would provide a fair and equitable way of ensuring that all eligible firms had an equal chance of benefitting from these public funds, and might reduce concerns about political connectedness determining who receives the grants. We drew on an old blog post of David’s to argue that governments shouldn’t try to choose the firms they think will be most successful, since what matters is which firms would change the most as a result of getting the grants, which is much harder to predict.
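As a concrete illustration, here is a minimal sketch of what such an oversubscription lottery might look like in Python (the field names, eligibility flag, and fixed seed are our illustrative assumptions, not details from the actual projects):

```python
import random

def grant_lottery(applicants, n_grants, seed=2013):
    """Randomly award n_grants among eligible applicants; the eligible
    firms not drawn form the control group.

    `applicants` is assumed to be a list of dicts with hypothetical
    'id' and 'eligible' fields; the fixed seed keeps the draw
    reproducible and auditable.
    """
    rng = random.Random(seed)
    eligible = [a for a in applicants if a["eligible"]]
    # If there are fewer eligible applicants than grants, there is no
    # lottery (and no experiment); this is exactly the take-up problem
    # described later in the post.
    winners = rng.sample(eligible, min(n_grants, len(eligible)))
    winner_ids = {a["id"] for a in winners}
    control = [a for a in eligible if a["id"] not in winner_ids]
    return winners, control
```

The fairness property comes from every eligible applicant facing the same selection probability, regardless of political connections.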

In one country where the program was already underway on a first-come, first-served basis, we tried an encouragement design. Markus details how this failed here: of the first 377 firms invited to an information event, only 61 showed up and only 18 signed up for the program. We therefore decided not to pursue encouragement designs in the other countries unless no other option was available.
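To see why those numbers killed the design, a back-of-the-envelope calculation helps. This uses the standard scaling rule for encouragement (instrumental variables) designs and assumes no take-up among non-encouraged firms; it is our illustration, not the authors' calculation:

```python
# Numbers from the post: 377 firms invited, 18 signed up.
invited, signed_up = 377, 18
take_up = signed_up / invited                 # first stage: about 0.048

# Standard result: the minimum detectable effect on actual participants
# scales the intent-to-treat effect by 1/take_up, so the required sample
# size grows by roughly (1/take_up)**2 relative to full compliance.
inflation = (1 / take_up) ** 2
print(f"take-up = {take_up:.3f}; required sample inflated ~{inflation:.0f}x")
```

With take-up below 5 percent, detecting any plausible effect would have required an infeasibly large pool of firms, consistent with the decision to drop encouragement designs where possible.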

So what happened?

Of the seven projects, five initially agreed to an oversubscription design, and encouragement designs were planned in the other two countries.

But then:

- In one country, the project was cancelled after the government launched its own competing matching grant program with a higher percentage of the funds provided by the government.

- Repeated program implementation delays led to turnover in government staff, with the new staff in two countries not in favor of randomized selection; the delays also brought us up against funding deadlines for the evaluations, giving us less leeway to wait for applications to accumulate.

- The most common problem was lack of take-up: our randomization strategy depended on an excess of applications, and we even supported additional marketing and TV/radio spots to try to boost applications, but in no country was there an excess of eligible applicants.

So why can’t governments find enough firms to take large subsidies? Part II of the post on Wednesday discusses the reasons why, and the resulting lessons for other attempts to evaluate SME programs.

Comments

Submitted by Anonymous on
One thing that might be worth thinking about when designing surveys of small businesses - including the so-called encouragement design - is that they are busy trying to make ends meet and are not interested in surveys. They do not want to be surveyed by governments and aid donors (even with, or especially with, 'breakfast and a movie'). This does not mean that they are not interested in the grants - provided that the grants do not come with such delays and so many strings attached that they actually interfere with or harm the business (which seems to be common).

While I agree that small businesses can be reluctant to answer surveys, this was not a cause of the problems experienced here. The application forms for the matching grants contain the basic baseline data one needs, so a separate baseline survey is not required; only follow-up surveys then need to be done, and we did not get to that stage in these studies. Your other point about the strings attached and delays certainly fits our experience - Part II will discuss these in more detail.
