Now the results for this aren’t quite ready for primetime (and I will blog on them when they are), but this got me thinking about “out of silo” impacts. These are significant impacts on domains which aren’t within the realm of the traditional use of the program. Other examples of this include Erica Field's work on land titling in Peru, which shows impacts on fertility far larger than those of your average family planning program, and Eliana La Ferrara and co-authors' work on soap operas, which also shows fertility impacts.
It strikes me that these kinds of results, which can be quite sizable, are doomed to little or no adoption on a larger scale. And the reason is a political economy one that stems from how government ministries and the international aid architecture are organized (the silo problem). As an example, take the transfer program I talked about above. The goal is social protection – to help people weather the bad times, get the kids off to a decent start, etc. So it is run by a social protection ministry and likely funded in part by departments of donor agencies concerned with social protection. Now when the president of this country asks the minister of industry how to boost small and medium enterprises, what does the minister say? He sure as heck doesn't say cash transfers, because he knows that his ministry will not run the program. And his counterparts in donor agencies will do the same. The minister of social protection might pipe up (or might not), but she won't be wholly credible, not least because she is stepping on her colleague's turf.
This suggests we need more generalists in decision-making roles when it comes to defining a policy response to a particular problem. These need to be folks who know when to use a wrench and when to use a hammer. And they have to have enough power to overrule the specialists within ministries or departments. This is not to say we don't need specialists, but we do need generalists who can think outside of the silo.
A few other points on this. First, those specialists can and do learn. One interesting example of this is family planning policy. If you read some of the more recent reviews (i.e. within the last decade) of reproductive health programs, for example, you will see that they recognize the need for combining access to reproductive health care (the main focus of the older-school view) with education programs and economic opportunities.
Second, on the part of the team carrying out the evaluation, these out of silo results require one of two things: fishing or a deep and constantly updating knowledge of what is going on with the program. Since these are often unintended consequences, the only way the evaluators will find these effects is by running a lot of regressions or by following closely how beneficiaries are reacting to a program. And the latter will require a fair amount of out of the box conversations with beneficiaries that aren't going to be picked up if the survey is bound by a strict definition of what the program is supposed to do. Now this obviously creates a tension with trial registries. But this tension may be partially allayed if you can register closer to the endline than the baseline (see David's post from Tuesday).
Third, when we do find these results, we need to do a careful job of understanding the causal chain. Since the connection wasn't obvious when the program was designed, this is going to require in-depth quantitative and qualitative work. But getting at the mechanisms driving these unexpected effects may not only make a clearer case for the out of silo intervention, but also help make the within-silo interventions work better.
Look, we know in medicine that sometimes drugs developed for one condition work better for something else. Heck, Viagra started life as a heart medicine. The same is true in development policy: we just need to keep our eyes out for those out of silo results, not be afraid to explore them, and then figure out a policy structure that lets us use them.