Randomized program implementation is currently seen as the ‘gold standard’ for impact evaluation in the search for the most effective development interventions. Earlier studies were criticized for their limited scope, so some of these interventions now involve large populations. Unfortunately, the larger the intervention, the greater the danger that people who were supposed to get the treatment do not receive it, and vice versa. Do such deviations invalidate the conclusions drawn from randomized studies?

My colleague Harold Alderman, together with Harvard economist Sebastian Linnemayr, addresses this question in a paper that evaluates the impact of a randomized nutrition intervention on the anthropometric status of Senegalese children. The paper finds that the measured impact is stronger in villages that actually received the intervention than in those that were merely assigned to receive it. This supports the view that large-scale, community-based health promotion can get parents to take better care of their children, and that children will benefit as a result. The paper also illustrates that randomization, even when it is not strictly followed, still helps to identify the program’s impact.

Given the rapidly increasing number of large-scale randomized interventions, more studies addressing this sort of question are needed. The most valuable lessons will come from studies that confront these difficulties rather than ignore them. The real world differs fundamentally from the laboratory setting in which the method of randomized experiments was first developed. It is time to address this fact in order to learn real lessons from, and for, real people.
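The gap between being assigned a treatment and actually receiving it can be made concrete with a small simulation. The sketch below uses entirely hypothetical numbers (take-up rates, effect size), not the Senegal data: it shows how, under imperfect compliance, the intention-to-treat (ITT) comparison by original assignment is diluted, while rescaling by actual take-up (the standard Wald/IV adjustment) recovers the effect among those who truly received the intervention.

```python
# Hypothetical simulation of a randomized trial with imperfect compliance.
# All numbers here are illustrative assumptions, not results from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                   # simulated villages

assigned = rng.integers(0, 2, n)              # randomized assignment (0/1)
# Assume 80% of assigned villages actually receive the intervention,
# while 10% of control villages obtain it anyway.
received = np.where(assigned == 1,
                    rng.random(n) < 0.8,
                    rng.random(n) < 0.1).astype(int)

true_effect = 2.0                             # assumed effect of actual receipt
outcome = 10 + true_effect * received + rng.normal(0, 1, n)

# ITT: compare outcomes by original assignment. Valid thanks to
# randomization, but diluted because assignment != receipt.
itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()

# Rescale the ITT by the difference in take-up rates (Wald/IV estimator)
# to recover the effect among villages whose receipt followed assignment.
take_up = received[assigned == 1].mean() - received[assigned == 0].mean()
iv = itt / take_up

print(f"ITT estimate: {itt:.2f}")             # ~1.4 = 2.0 * (0.8 - 0.1)
print(f"IV estimate:  {iv:.2f}")              # ~2.0, the assumed true effect
```

The point of the sketch is the one the paper makes: noncompliance attenuates the assignment-based estimate rather than invalidating it, and the randomized assignment still provides the leverage needed to back out the program's impact.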