

Hi Heather,

Great comments - thanks.

I think we're in agreement on your first point. If you have a pretty good idea that someone needs your intervention now rather than later, you might just exclude that group from the study and treat them immediately. This is subject to the caveats raised by David and Markus about how well we can know who is in need (Markus has a particularly nice point about his study on the effects of ARVs on productivity -- you should check it out). As a side note, I think of this less in terms of equipoise and more as 'do no harm.' If a program is emergency aid or palliative, its goals are different from those of programs trying to reduce future poverty: the former kind (akin to Markus' quadrant in the 2x2) should, in most cases, not be subject to randomization unless there are severe rollout constraints.

I think the issue of program goals in economics is important and speaks to your second point about going through with phase II. In economics, unlike in medicine, the programs we evaluate often involve transferring something to individuals, households, or communities (assets, information, money, etc.). Absent negative spillovers, we expect these transfers to increase individual welfare, at least temporarily: if I give you a cow, that's great for you. If you don't like it, sell it: your individual welfare will still increase (it would have been even higher if I had just given you the cash). But what if my program's goal is not a temporary jump in your welfare, but helping you escape poverty as close to permanently as possible? The program could be deemed unsuccessful even though it raised the welfare of its beneficiaries for a short period.

The point is, it does seem wrong to break your promise to give something (something people would like to have) to the people who drew phase II in the lottery just because you deemed your program unsuccessful at reaching its goals. You promised them the treatment at the outset, so I'd argue that if you're going to break that promise, you have to give them something at least as good, if not better. If you can come up with this (and the phase II group is happy with your decision), perhaps they can even become your phase I group in a new experiment -- in a process where you experiment, tweak, experiment again, … Kind of like what Pritchett et al. argue we should do: a lot more experiments, not fewer…

Turning to your examples: with the Oregon healthcare reform, it would be hard to push a stop or pause button on legislation. Government action takes time, and the credibility of your policymakers is at stake. I don't think you could really argue for a stop/pause on the grounds that the impacts (even if unequivocal) are too small to justify treating the lottery losers. In the case of a project giving cows, I am more optimistic: the project might be able to find an alternative treatment that is of equal or higher value, acceptable to the phase II group, and feasible to roll out quickly. In such cases, I could see tweaking the intervention between the two phases.

Finally, this reminds me of an issue I have been thinking about recently -- regarding data collection plans/phases rather than program phases. Often, we have funds secured for multiple rounds of follow-up. But you may find yourself in a situation where your early rounds of data collection are so unpromising that you give up hope of finding any effects. First, and pretty obviously, in such cases it would probably be better to return the money to the funder or switch it to a more productive endeavor. Second, research funders may benefit from encouraging proposals that apply for phased-in funding, conditional not on the production of data but on evidence of promising results. Sure, there may be 'sleeper effects' and the like, but it would be good for researchers to propose ahead of time what defines success (or at least findings promising enough to keep going with data collection) at each round of impact analysis. This is related to Epstein and Klerman's recent paper asking when a program is ready for evaluation. Making both data collection funding and program implementation conditional on program success is, I think, a promising avenue to pursue…

Thanks again,

Berk.