Published on Development Impact

Weekly links December 18: rejecting rejections, measuring success in economics, underreporting in polisci, and more…

  • In the BMJ Christmas edition, a nice form letter for rejecting journal rejections: “As you are probably aware we receive many rejections each year and are simply not able to accept them all. In fact, with increasing pressure on citation rates and fiercely competitive funding structures we typically accept fewer than 30% of the rejections we receive… We do wish you and your editorial team every success with your rejections in the future and hope they find safe harbour elsewhere. To this end, may we suggest you send one to [insert name of rival research group] for consideration. They accept rejections from some very influential journals.”
  • From the Political Science Replication blog: researchers looked at NSF proposals under the TESS program and compared the pre-analysis plans and questionnaires to what was actually published, finding that 80% of papers fail to report all experimental conditions and outcomes.
  • Ashu Handa on why impact evaluations with governments are important – with an RDD graph at the end that will make Dave Evans very very sad.
  • In VoxEU, Dan Hamermesh on how to measure success in economics – “Properly judging success in economics requires paying attention to individual outcomes, not to aggregates that are poor signals of the individual results of which they are comprised” – basically pay less attention to easy metrics like which journals research is published in or which department a researcher teaches at.
  • On the CGD blog, Lant Pritchett discusses a new 3ie working paper on decision-focused vs. knowledge-focused impact evaluations, and why we need more of the former. The argument is not as clear as in many Lant posts, but it has this interesting nugget: “In 1996 circumstances led me to be the World Bank “task manager” of record of an early RCT by Michael Kremer and Paul Glewwe. Hence I was an early non-adopter of this new technique because I could see it was leading research away from, not towards, key needed insights about the process of improving development projects and programs. For about 20 years I have been part of the antithesis. I have always maintained that RCTs got one claim right—using randomization in assignment was the best way to identify the causal impacts of particular interventions—but got everything else about the use and impact of RCTs on development policy and practice wrong”
  • Chris Blattman with more on the “do you need to cluster standard errors if randomization is at the individual level” question (a toy illustration follows this list).
  • Job opening: The Africa Gender Innovation Lab is looking for a Field Coordinator to support the impact evaluation of the Western Growth Poles project in the DRC. One of the Field Coordinator’s main tasks will be to help us design and implement a childcare intervention that will be evaluated in an RCT. Experience and interest in early childhood development, in addition to impact evaluation skills, will be a strong plus.
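
Since the clustering question is easy to get turned around on, here is a minimal, hypothetical sketch in Python (using statsmodels), not taken from Blattman’s post: it simulates an experiment where treatment is randomized at the individual level, then compares heteroskedasticity-robust standard errors with standard errors clustered on an arbitrary grouping (a made-up “region” variable, unrelated to assignment). With individual-level assignment and independent sampling, the two should typically come out similar.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "treat": rng.integers(0, 2, n),    # treatment randomized at the individual level
        "region": rng.integers(0, 50, n),  # arbitrary grouping, unrelated to assignment
    })
    df["y"] = 1.0 + 0.5 * df["treat"] + rng.normal(size=n)

    model = smf.ols("y ~ treat", data=df)
    # heteroskedasticity-robust standard errors
    robust = model.fit(cov_type="HC1")
    # standard errors clustered on the (irrelevant) region grouping
    clustered = model.fit(cov_type="cluster", cov_kwds={"groups": df["region"]})

    print("robust SE:   ", robust.bse["treat"])
    print("clustered SE:", clustered.bse["treat"])

All names and parameters here are invented for illustration; the substantive point, that clustering matters when sampling or assignment is clustered rather than individual, is the one debated in the linked post.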

Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
