In some joint work with an African government, my colleagues Francisco Campos, Jessica Leino and I were trying to evaluate the impacts of one of their support programs for small businesses. This service was open to anyone who contacted them, but the number of entrepreneurs who knew about the program (and hence who used it) was low. Basically, the way the program worked was that when the entrepreneur came into the office and registered for the program, the implementing agency would assess the needs of the business and then provide the entrepreneur with subsidized access to a number of private sector business development service providers. Support ranged from technical help to marketing. The government was wondering about the return to this package of services, especially as it had higher subsidies for disadvantaged groups.
But randomization was off the table -- given the political need to provide services for small businesses, combined with the fact that the service had been operating for a number of years as open to all, we couldn't come up with a way to randomize for evaluating the program as is. So, we decided to do random encouragement. Working with the agency's marketing people (who were also keen to improve outreach), we came up with a plan. We did a baseline survey of about 1000 small businesses (finding them was another story). We then selected 600 of these for "the outreach package." We chose 600 because this was an approach that had worked with other programs from this agency, and although we wanted to leave some room for less than perfect uptake, we wanted to keep as much power as we could. The outreach package consisted of calling these folks up (we asked consent for this in the survey) and inviting them to breakfast and a movie. The movie was a nice short piece which explained the service and what people could get out of it. Staff from the agency would be on hand, not only to answer questions, but also to register people for the assessments which would get them started in the program. The agency followed up with people via SMS and calls to remind them of the event. We were going to do a number of these in movie theaters in major cities and towns so that the entrepreneurs wouldn't have to travel far. This sounded like a winner -- I mean, breakfast and a movie -- why not?
Well, apparently not. Of the 377 folks invited to the first three movie events, only 61 showed up. Of those entrepreneurs, only 18 signed up as clients. This uptake might have been higher if we had done more follow-up after the events, but given the levels of attendance, we realized we weren't going to have enough power even if we got 100 percent uptake among attendees, so we pulled the plug on further events.
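To see why those numbers sank the design, a back-of-the-envelope calculation helps. In an encouragement (instrumental-variables) design, the effect on actual program users is the intent-to-treat effect divided by the difference in take-up between the encouraged and non-encouraged groups, so the standard error is inflated by the same factor. The sketch below uses the figures from the post; the assumption that take-up among the non-encouraged is roughly zero is mine, for illustration.

```python
# Why low take-up sinks an encouragement design: the IV (LATE) estimate
# divides the intent-to-treat effect by the first stage (the take-up
# differential), inflating the minimum detectable effect by 1 / first stage.

invited = 377     # entrepreneurs invited to the first three movie events
attended = 61     # showed up for breakfast and a movie
signed_up = 18    # registered as program clients

attendance_rate = attended / invited
# First stage: assumes ~0 take-up among the non-encouraged (my assumption).
first_stage = signed_up / invited
mde_inflation = 1 / first_stage

print(f"attendance rate: {attendance_rate:.1%}")            # ~16.2%
print(f"take-up (first stage): {first_stage:.1%}")          # ~4.8%
print(f"MDE inflation vs. full compliance: {mde_inflation:.0f}x")
```

With a first stage of roughly 5 percent, the minimum detectable effect is around twenty times what it would be under full compliance -- even perfect sign-up among those who attended would not have rescued power at these attendance levels.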
We learned three particular lessons in this context. First, the requirements of a survey and what you need for encouragement to attend an event are somewhat different -- a survey comes to the individual's business; here we wanted folks to come to us, and we needed to call them, repeatedly and on all possible numbers. Second, transport costs, which we know matter in general, really mattered here -- even though the events were local. Third, there was low awareness, and likely some skepticism about government delivering this service (although not extreme skepticism, as I'll explain in a minute). More generally, in discussing this with other people trying to do encouragement design in private-sector types of interventions, this seems to be a common experience -- it's hard to encourage entrepreneurs into a program.
The epilogue to this story is that, after extensive discussions with the implementing agency, we decided to switch the intervention. Now, the field staff from the program are reaching out directly to entrepreneurs from the survey sample and then going out to the field to do the assessments for those who agree (again we're starting with a target of 600). Uptake is much higher, and we're likely to end up in a place where we can say something meaningful about the program. But the program now is different -- this more proactive model isn't what the government had in place originally. Part of our discussions with the agency was about precisely this -- did they think this would be a program model they could and would continue? The answer to this was yes, which was key for us because evaluating a variant of the original program that was not sustainable didn't strike the government or us as good use of resources in this context.
Anyhow, this whole experience (my first with encouragement design) has made me much more cautious about using this as a method. Have you had positive or negative experiences with encouragement? I'd love to hear about them, particularly those that worked without making the program into something it wasn't...