
Worm Wars: A Review of the Reanalysis of Miguel and Kremer’s Deworming Study

This post follows directly from the previous one, which is my response to Brown and Wood’s (B&W) response to “How Scientific Are Scientific Replications?” It will likely be easier for you to digest what follows if you have at least read B&W’s post and my response to it. The title of this post refers to this tweet by @brettkeller, the responses to which kindly demanded that I follow through with my promise of reviewing this replication when it got published online.

Response to Brown and Wood's "How Scientific Are Scientific Replications? A Response"

I thank Annette Brown and Benjamin Wood (B&W from here on) for their response to my previous post about the 3ie replication window. It not only clarified some of the thinking behind their approach, but arrived at an opportune moment – just as I was preparing a new post on part 2 of the replication (or reanalysis, as they call it) of Miguel and Kremer’s 2004 Econometrica paper, “Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities,” by Davey et al. (2014b) and the response (Hicks, Kremer, and Miguel 2014b, HKM from here on). While I appreciate B&W’s clarifications, I respectfully disagree on two key points, which also happen to illustrate why I think the reanalysis of the original data by Davey et al. (2014b) ends up being flawed.

How scientific are scientific replications? A response by Annette N. Brown and Benjamin D.K. Wood

A few months ago, Berk Ozler wrote an impressive blog post about 3ie’s replication program that posed the question “how scientific are scientific replications?” As the folks at 3ie who oversee the replication program, we want to take the opportunity to answer that question. Our simple answer is: they are not meant to be.

Guest Post by Sebastian Galiani: Replication in Social Sciences: Generalization of Cause-and-Effect Constructs

I agree with the general point raised by Berk in his previous post on this blog (read it here). We need to discuss when and how to conduct scientific replication of existing research in the social sciences. I also agree with him that, at least in economics, pure replication analysis (which in my view is the only genuine replication analysis) is of secondary interest; I hope to return to this issue in a future contribution to this blog. Instead, I believe that we should emphasize replication of relevant and internally valid studies in both similar and different environments. There is now excessive confidence in the knowledge gathered by a single study in a particular environment, perhaps as a result of a misreading of the virtues of experimentation in the social sciences. As Donald T. Campbell once wrote (1969):

Calling all skeptics


Have you seen an impact evaluation result that gives you pause? Well, now there’s an institutional way to check the results of already published evaluations. 3ie recently announced a replication program. It will focus on internal validity: replicating the results with the existing data and/or, in some cases, using different data from the same population to check results.