
Blog links June 13, 2015: replication, beyond the top 5, the limits of meta-analysis, career motivations for health workers and more…

  • No top 5s, no worries: On VoxEU, John Gibson summarizes his new research on the relationship between top-5 journal publications and salaries in the UC system: “Even among tenured full professors, one in six have never published in the top-five journals; these without-top-five economists are found in eight of the nine University of California economics departments….Turning to the effect on salary, conditional on overall research output, there is no statistically significant penalty for economists lacking an article in top-five journals.”
  • 3ie has released the first of its funded replication studies – a replication of Jensen and Oster’s QJE paper on the effects of cable television on women’s status in rural India. Jensen and Oster respond: “We recognize the scientific merit of replication and fully support such efforts. We also appreciate the willingness of the authors to take on this (often contentious, and thankless) process, as well as the considerable amount of effort the authors have put into their study. However, we disagree with the content and conclusions of their replication…It should also be emphasized to replicating authors that the goal of a replication is not to overturn the results of a previous paper.” The last section of their detailed reply has recommendations on how to improve the replication process.
  • Cyrus Samii on the limits of meta-analysis and “best evidence synthesis” as an alternative: “in very few cases will we have enough high quality and truly comparable studies for meta-analytic methods to be applied in a way that makes sense…we end up with reviews that compromise either on study quality standards or comparability standards so as to obtain a large enough set of studies to fulfill the sample size needs for conducting a meta analysis! These compromises are masked by the use of generic effect size metrics. This is the ultimate in the tail wagging the dog, and the result is a lot of crappy evidence synthesis”
  • Microcredit and poverty alleviation – The Economist’s Free Exchange blog on the Morocco microfinance RCT.
  • Please do not teach this woman to fish – Foreign Policy on the limits to small-scale entrepreneurship in developing countries.
  • Poverty under the microscope – The Chronicle of Higher Education discusses the to-and-fro over RCTs and development, with quotes from many of the usual suspects.
  • On the LSE Impact of Social Sciences blog, Oriana Bandiera discusses her research in Zambia testing the importance of emphasizing social versus career motivations when recruiting health workers. Emphasizing career motivations attracts higher-quality workers.
Finally, for those in the U.S., Happy Father’s Day this weekend – here’s my old post on Dads and Development as a now-annual salute.
 

Comments

Submitted by Heather on

On (internal) replications, two questions/thoughts. (1) At what point does the peer-review process start involving replication, so that "kinks" (when they are there and that is what they are) can be addressed before publication and, perhaps more consequentially, before giving policy advice? (2) Would it be better or worse (or neither) to fund two replications at the same time, so that the exercise doesn't just pit one set of new results against the original but instead produces something that could start a conversation rather than a fight?

Submitted by Annie on

The main problem with these 3ie replications is that they are meant to be massive exercises in data mining, as Jensen and Oster clearly point out. What other purpose could a replicator pursue than to claim that the original results were not robust? We know that already! Nothing is 100% robust. If you estimate 100 different specifications, you will find that in at least 5% of them an effect vanishes, and possibly it will happen many more times. However, many, if not most, of those 100 alternative specifications are not necessarily better than the ones reported in the paper, nor even right. Thus, publishing a report on the 3ie website disputing a paper is nothing more than just that.
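As a rough illustration of the multiple-specifications point in the comment above, here is a minimal Monte Carlo sketch (my own hypothetical example, with invented parameters; it is not drawn from the 3ie study or the original paper). It simulates data with a genuine treatment effect and re-estimates that effect across 100 arbitrary "specifications" – here, random 70% subsamples standing in for alternative sample restrictions or control sets – counting how often the effect loses significance at the 5% level.

```python
# Hypothetical sketch: a real effect can "vanish" in many of 100 specifications.
import numpy as np

rng = np.random.default_rng(0)
n, true_effect, n_specs = 500, 0.2, 100        # invented parameters

treat = rng.integers(0, 2, n)                  # random treatment assignment
y = true_effect * treat + rng.normal(size=n)   # outcome with a true effect

vanished = 0
for _ in range(n_specs):
    # Each "specification" is a random 70% subsample, a crude stand-in
    # for alternative sample restrictions or control sets.
    idx = rng.choice(n, size=int(0.7 * n), replace=False)
    t, out = treat[idx], y[idx]
    diff = out[t == 1].mean() - out[t == 0].mean()
    se = np.sqrt(out[t == 1].var(ddof=1) / (t == 1).sum()
                 + out[t == 0].var(ddof=1) / (t == 0).sum())
    if abs(diff / se) < 1.96:                  # not significant at 5% level
        vanished += 1

print(f"{vanished} of {n_specs} specifications 'lose' the true effect")
```

With these made-up numbers the test is only moderately powered, so a nontrivial share of specifications fail to reject the null even though the effect is real – which is why a replication that merely finds some non-robust specifications is not, by itself, evidence against the original result.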

Submitted by Ben on

Thanks for highlighting the publication of our first 3ie-funded replication study. This first pilot study was a few years in the making and was commissioned before we established the policies for 3ie's replication programme. As Jensen and Oster mention in their comment, the policies and procedures we now have in place address some of their suggestions. Our notification and communication policy for replication research ensures original authors are informed beforehand of any public discussions of the replication study. It also outlines the replication plans we now require our researchers to publish before conducting their study (created partially to avoid "kitchen sink" replication studies). We believe there are many incentives to conduct (internal) replication studies beyond catching "mistakes". Good evidence-based policy making often requires better understanding of the robustness of research, an exploration of possible heterogeneous impacts of the findings, or confirmation of the intervention's causal chain.
