
Worm Wars: A Review of the Reanalysis of Miguel and Kremer’s Deworming Study

Berk Ozler
This post follows directly from the previous one, which is my response to Brown and Wood’s (B&W) response to “How Scientific Are Scientific Replications?” It will likely be easier for you to digest what follows if you have at least read B&W’s post and my response to it. The title of this post refers to this tweet by @brettkeller, the responses to which kindly demanded that I follow through with my promise of reviewing this replication when it got published online.

Response to Brown and Wood's "How Scientific Are Scientific Replications? A Response"

Berk Ozler
I thank Annette Brown and Benjamin Wood (B&W from hereon) for their response to my previous post about the 3ie replication window. It not only clarified some of the thinking behind their approach, but arrived at an opportune moment – just as I was preparing a new post on part 2 of the replication (or reanalysis as they call it) of Miguel and Kremer’s 2004 Econometrica paper titled “Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities,” by Davey et al. (2014b) and the response (Hicks, Kremer, and Miguel 2014b, HKM from hereon).  While I appreciate B&W’s clarifications, I respectfully disagree on two key points, which also happen to illustrate why I think the reanalysis of the original data by Davey et al. (2014b) ends up being flawed.

How scientific are scientific replications? A response by Annette N. Brown and Benjamin D.K. Wood

A few months ago, Berk Ozler wrote an impressive blog post about 3ie’s replication program that posed the question “how scientific are scientific replications?” As the folks at 3ie who oversee the replication program, we want to take the opportunity to answer that question. Our simple answer is: they are not meant to be.

Weekly links January 23: aid vs conflict, nudging Guatemalans, how the poor think, and more…

David McKenzie
  • Soap Operas and Development: Business Week summarizes a lot of recent work and some ongoing work on using soap operas to change behaviors.
  • When the nudge unit went to Guatemala – results from efforts to increase tax collection from changes in the phrasing of tax letters etc.
  • The Deliberative Lives Project: “The goal of the project is to do something similar to ‘Portfolios of the Poor’ or ‘Economic Lives of the Poor’, but for thoughts and decisions. A novel feature is that everyone can contribute to design and data analysis: the (de-identified) data will be posted online in real-time as it is collected, and anyone can download and analyze it. Similarly, questionnaires will be developed with input from anyone who wants to give it.”

What you don't know can hurt you: Malaria edition

Markus Goldstein
You are feeling not so well. You go to the doctor. She is a good doctor. She runs some tests, tells you nothing is wrong with you, and you leave, ready to get back to work. Why are you so much more ready to work now than you were before you saw your doctor?

Can incentives lead to sustained impacts? The case of rewarding safe sex.

Damien de Walque
Economists believe that incentives matter and that they can be used to change people’s behavior. Incentives are used to encourage school attendance and performance, or to increase the coverage and quality of health care delivery. But a recurrent question is: what happens once the incentives are discontinued? Will the incentives’ effects be sustained even after payments stop, because individuals have been nudged towards a different behavior? Or will those effects die down and disappear once the incentives are removed?

How standard is a standard deviation? A cautionary note on using SDs to compare across impact evaluations in education

Guest post by Abhijeet Singh
Last week on this blog, David wondered whether we should give up on using SDs for comparing effect sizes across impact evaluations. I wish that question were asked more often in the field of impact evaluations in education, where such comparisons are most rife. In this post, I explore some of the reasons why such comparisons might be flawed and what we might do to move towards less fragile metrics.
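As a quick illustration of why the SD metric can be fragile, here is a minimal sketch in Python (with hypothetical numbers of my own, not drawn from the post) showing how the same absolute gain in test scores translates into very different effect sizes once it is divided by the standard deviation of the outcome in each sample.

```python
# A minimal sketch with hypothetical numbers: the same absolute gain in
# test scores looks very different once expressed in standard deviations,
# because the SD of the outcome varies across samples and tests.

raw_gain = 5.0  # hypothetical: both programs raise scores by 5 points

# Hypothetical standard deviations of the test-score distribution in two
# evaluation samples (e.g., a narrow-ability sample vs. a broad one).
sd_sample_a = 10.0
sd_sample_b = 25.0

effect_size_a = raw_gain / sd_sample_a  # 0.50 SD
effect_size_b = raw_gain / sd_sample_b  # 0.20 SD

print(f"Sample A effect size: {effect_size_a:.2f} SD")
print(f"Sample B effect size: {effect_size_b:.2f} SD")
# Identical raw impacts, yet the SD metric suggests one program is
# 2.5 times as effective as the other.
```

The raw impact is identical in both cases; only the dispersion of the test-score distribution differs, which is one reason such cross-study comparisons can mislead.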

Blog links Jan 9 2015: Angrist and Niederle on pre-analysis, problems of phase-ins, French-speaking field coordinators needed, and more…

David McKenzie
  • Field coordinator position: we are looking for a French speaker to help oversee surveys of informal firms in Benin. TOR and details.
  • Field coordinator position: three positions for French speakers to work with the Africa Gender Innovation Lab on Youth Employment projects.
  • Call for papers: the Annual Bank Conference on Africa, to be held June 8-9 at Berkeley – submissions due Jan 31.
