Busia, Kenya is the project home of the Dutch NGO International Child Support (ICS), and the site of many of the first randomized experiments in development economics. ICS was the partner in the famous Miguel and Kremer deworming study, in a whole host of RCTs aimed at increasing schooling (including providing free textbooks, free school uniforms, incentives for teachers, and teacher training), and in Pascaline Dupas' sugar daddies paper. A fascinating new working paper by Eric Roetman describes the history of ICS's involvement in randomized trials, and how the organization has moved away from evaluating its projects with randomized experiments toward an alternative (and in my opinion somewhat hokey) process.
The collaboration began in 1995, when ICS chose a selection of projects from its program in West Kenya, with the idea that evaluating these projects would allow better-informed decisions to be made. Since the external researchers organized the funding, including financial support for ICS to cover the extra costs of the evaluations, this was initially seen as a win-win situation.
But then Roetman describes two issues that arose. First, ICS was a small NGO, and after this initial success the programs that ICS began to implement, and the way in which they were implemented, seemed increasingly driven by the demands of evaluations:
“At its peak in 2004, ICS’ branch in Kenya had a budget of about 1.2 million euros to spend on programmes and ICS had more than 80 people on its payroll, of which 70 were involved full time with evaluations….In the following years, an increasing number of evaluations were organized within ICS. As a result, the evaluations became harder to manage. Bigger samples had to be taken; more schools and communities had to be involved. ICS had to expand its programme. And, on top of that, information had to be collected in the schools and communities that acted as control groups. The funds for this expansion were drawn by external researchers. This made it more difficult for the management to strictly separate the means and management of programmes and evaluations. The line between programmes and evaluations began to blur.
It had not been a problem to gain funds for new impact evaluations. But what increasingly became a problem was the identification of new subjects for the evaluations, that is new projects to evaluate. As a result, there was a lot of pressure on the 1.2 million budget to keep initiating new types of projects that were interesting to research. The choice of research questions had implications for programmatic choices and vice versa. As more evaluations were done, it became harder to fit these projects neatly into ICS’ program agenda.”
His claim is that ICS has become a development lab, doing short-term interventions and evaluating the results while leaving it to others to scale up what was successful. Roetman notes that this flexibility is one advantage of an NGO: it is much easier for NGOs to select participants randomly than it is for governments, because NGOs are not expected to serve everybody.
The second issue Roetman describes is a change in the board, with the new board focused on participatory programs. Indeed, ICS's website now describes its mission as aiming "to be one of the stakeholders in processes of change that are initiated by children and communities themselves". The new board thought that rigorous impact evaluations hindered genuine participation: people in schools and communities had too little influence on the design of the projects. Instead of asking people to participate in ICS projects, they wanted it to be the other way around, with ICS strengthening existing local initiatives. This was despite negative results from one randomized evaluation of previous ICS attempts to strengthen local women's groups.
As a result, according to the article, ICS no longer does randomized evaluations of its projects, and has instead adopted "Social Return on Investment (SROI)". According to the author, in this method key stakeholders are asked to attribute values to the economic, social, and environmental costs and benefits of a project, which are then compared. This method is relatively new, but already has its own Wikipedia page, and no doubt an invested army of supporters. It is not at all clear how it comes up with a counterfactual, though, and the reliance on perceptions of effects seems problematic.
ICS's case certainly shows the challenges a small NGO faced in instituting rigorous evaluation on a widespread basis, and illustrates some of the concerns about letting evaluations drive programs. On the other hand, I would argue that having the free insights of development research leaders when designing programs is something NGOs working on development should welcome. Moreover, ICS is certainly a lot more famous today than it otherwise would be because of these evaluations.
Overall, my feeling is that ICS has played an important demonstration role, and has hopefully helped convince more NGOs, and especially larger government and IFI projects, to undertake impact evaluations. Ultimately the results of the ICS studies have been very useful, but it will also be good to see less from Busia and more from the rest of the world.
Note: I was alerted to this paper through a post at Monitoring and Evaluation News.