· Work for me this summer: I'm looking for someone with good Stata skills who can help work with data coming in from a couple of randomized experiments, as well as help develop and design some new work on improving measurement of business profits in developing countries. The latter would include the use of some innovative experiments with RFID technology, which I don't know much about, so the summer intern would spend some time trying to set this up. The position would be DC-based, but there would be the possibility of a few weeks of fieldwork, depending on the interest of the person and how quickly the RFID work can get up and running. This project is something that could potentially lead to a co-authored paper, depending on how the intern performs. Ideally you should be a PhD student in a top program, or otherwise an exceptional undergraduate. If you are interested, email me your CV and a cover letter describing your qualifications/interests – I will only reply if I want to follow up with you.
· Work for Markus: Job opening with the Gender Innovation Lab at the World Bank, working on Private Sector Development (PSD), Agriculture and Rural Development (ARD), and gender in Africa. The work centers on a set of rigorous impact evaluation studies: collaborating with various partners on the design of innovative interventions to address gender inequality, as well as on the evaluations themselves, including design, implementation, and data analysis.
· The long-term impacts of a CCT in Nicaragua – J-PAL summarizes work by Tania Barham, Karen Macours and John Maluccio: “By 2010, seven years after the early treatment group stopped receiving the transfers, boys in the early treatment group still had nearly half a year more schooling than those in the late treatment group. The increase in years of schooling was accompanied by gains in learning…However, no significant impact was found on cognition, as measured by the Raven test, consistent with cognitive development taking place mostly during early childhood.”
· The demand for evidence? Chris Blattman summarizes an experiment which tested microfinance organizations’ willingness to learn about studies whose findings are more positive or more negative for microfinance – organizations are twice as likely to want the information if told it is positive.
· Paper with a cool title but a serious implication: Star Wars: The Empirics Strike Back – “Using 50,000 tests published between 2005 and 2011 in the AER, JPE, and QJE, we identify a residual in the distribution of tests that cannot be explained by selection. The distribution of p-values exhibits a camel shape with abundant p-values above 0.25, a valley between 0.25 and 0.10 and a bump slightly below 0.05. The missing tests (with p-values between 0.25 and 0.10) can be retrieved just after the 0.05 threshold and represent 10% to 20% of marginally rejected tests. Our interpretation is that researchers might be tempted to inflate the value of those almost-rejected tests by choosing a “significant” specification.” The figure here shows this:
The good news for those of us doing experiments is that this seems to be much less of a problem for experiments. Other interesting findings are that papers published by tenured and older researchers are less prone to this issue; inflation seems larger in articles that thank research assistants; but it does not vary with the availability of data and code on journals' websites (h/t @JustinSandefur)