- A webcast of the AEA panel on “publishing in economics journals: the curse of the top 5” (h/t @DurRobert) – Heckman, Akerlof, Deaton, Fudenberg and Hansen discuss. Some interesting discussion and comments – Deaton notes he didn’t have any papers rejected until he was famous; Heckman presented a lot of data, including this figure, which shows (first column) which journals account for most dissemination of the ideas of the top development economists – with the WBER number 1:
David McKenzie's blog
In my very first experiment, Suresh de Mel, Chris Woodruff, and I gave small grants of capital to microenterprises in Sri Lanka. We found that these one-time grants had lasting impacts on firm profitability for male owners. However, despite these increases in firm profits, few owners made the leap from self-employed to hiring others.
In 2008 we therefore started a new experiment with a different group of Sri Lankan microenterprises, trying to see if we could help them make this transition to becoming employers. Eight years later, I’m delighted to finally have a working paper out with the results.
- Fiona Burlig blogs on her new paper about how to do more accurate power calculations for experiments that use panel data (more T). There is apparently also Stata code, but I haven’t yet been able to download it and play around with it.
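The paper and accompanying code provide analytical formulas; for intuition, the same question can also be answered by simulation. Below is a minimal, generic sketch (my own illustration, not Burlig's Stata code): simulate a panel with unit fixed effects and serially correlated errors, collapse each unit to a single pre/post difference (which absorbs the fixed effect and sidesteps serial correlation), estimate the treatment effect by difference-in-differences, and count how often the null is rejected. All parameter names and default values here are hypothetical.

```python
import numpy as np

def simulated_power(n_units=100, t_pre=3, t_post=3, effect=0.2,
                    sigma_u=1.0, sigma_e=1.0, rho=0.5,
                    n_sims=200, crit=1.96, seed=0):
    """Monte Carlo power for a panel difference-in-differences design.

    Each unit is collapsed to (mean post-period outcome - mean
    pre-period outcome); the treatment effect is the difference in
    these unit-level diffs between treated and control groups.
    """
    rng = np.random.default_rng(seed)
    T = t_pre + t_post
    rejections = 0
    for _ in range(n_sims):
        # random assignment of half the units to treatment
        treat = rng.permutation(
            np.r_[np.ones(n_units // 2), np.zeros(n_units - n_units // 2)])
        u = rng.normal(0, sigma_u, n_units)       # unit fixed effects
        e = np.zeros((n_units, T))                # AR(1) errors, stationary
        e[:, 0] = rng.normal(0, sigma_e, n_units)
        for t in range(1, T):
            e[:, t] = (rho * e[:, t - 1]
                       + rng.normal(0, sigma_e * np.sqrt(1 - rho**2), n_units))
        y = u[:, None] + e
        y[:, t_pre:] += effect * treat[:, None]   # effect in post periods only
        # unit-level pre/post difference removes the fixed effect exactly
        d = y[:, t_pre:].mean(axis=1) - y[:, :t_pre].mean(axis=1)
        d1, d0 = d[treat == 1], d[treat == 0]
        est = d1.mean() - d0.mean()
        se = np.sqrt(d1.var(ddof=1) / len(d1) + d0.var(ddof=1) / len(d0))
        rejections += abs(est / se) > crit
    return rejections / n_sims
```

Raising `t_pre`/`t_post` (more T) shrinks the variance of each unit-level diff and so raises power, which is the margin the paper's formulas characterize analytically.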
- In time for those on the job market, the CSWEP newsletter has advice from a number of economists on how to handle the dual career search process: lots of different perspectives and advice. One piece discusses how ambiguity aversion means it can be helpful to reveal your status, whatever it is. The majority seem to suggest you should disclose this information around the time you are invited for a flyout.
- How much does interviewing with bystanders around affect survey responses? The inaugural blog post from Kantar Public Africa and Middle East reports that i) bystanders are present in about half of interviews – mostly non-family and extended-family members such as neighbours, domestic staff, and children, rather than the spouse; ii) bystander presence has little effect on responses to non-sensitive questions, but does affect responses to some sensitive ones; iii) the presence of the husband or wife can sometimes improve accuracy.
- From the IDB Development that Works blog – a program in Bolivia managed to reduce malnutrition, but made the kids overweight instead.
- Solving social problems with machine learning – Kleinberg, Ludwig and Mullainathan in the Harvard Business Review
- From VoxEU, the disappointing impact of employment subsidies in Spain (including discussion of using RD when firms sort around the threshold)
- Vox covers a new paper in Science on the impact of M-Pesa on poverty in Kenya
- The Book of Spurious Correlations
- JPAL/MIT Master’s in data, economics, and development policy
- David Roodman at GiveWell delves in depth into assessing the case for deworming
- What has Jane Austen got to do with how households respond to public works? Shwetlena Sabarwal explains on the Africa Can blog.
- Jishnu Das on why he is getting hate mail for his work on the informal health care sector in India
- Marc Bellemare on how to think systematically about selection
- Dave Evans's paper with Anna Popova on cash transfers and temptation goods just came out in EDCC. In a local version of "are working papers working?", he lays out the differences between the working paper and published versions here.
- A first look at Facebook’s high-resolution population maps on the World Bank’s Open data blog.
- Call for papers: the PACDEV conference will be held at UC Riverside on March 11; and the 2017 Symposium on Economic Experiments in Developing Countries will be at the University of East Anglia on April 20-21.
- Andrew Gelman on how to think more seriously about the design of exploratory studies
- Overcoming premature evaluation, discussed on the From Poverty to Power blog: “There is a growing interest in safe-fail experimentation, failing fast and rapid real time feedback loops…When it comes to complex settings there is a lot of merit in ‘crawling the design space’ and testing options, but I think there are also a number of concerns with this that should be getting more air time…it can simply take time for a program to generate positive tangible and measurable outcomes, and it may be that on some measures a program that may ultimately be successful dips below the ‘it’s working’ curve on its way to that success…more importantly it ignores some key aspects of the complex adaptive systems in which programs are embedded…if we are serious about going beyond saying ‘context matters’ then exhortations to ‘fail fast’ need to be more thoroughly debated.”
- Don’t write a big block of text with no breaks: whether it is several subheadings, some bullet points or numbered lists, or something else, make the blog post easier to read by using something to break the text up. Remember, readers might be reading this on a mobile phone or skimming it quickly to decide whether it is worth reading, so two pages of solid text with nothing else will not hold their attention.
- Make sure to give magnitudes, not just significance: don’t just say “we found the program increased education for women”, but tell us by how much, and, where appropriate, some benchmark to help us tell whether this is a big or small effect.
- Hyperlink any references, and spell the authors’ names correctly.
- Get quickly to what you did, and make clear what your methods are: while general motivation for why what you are doing is important is useful, you should be able to make the case for why we should care in a paragraph or less – then we want to hear about what you did, and how you did this. Then give key details – if you do an experiment, make clear the sample sizes, unit of randomization etc.; if you do difference-in-differences, make clear why the parallel trends assumption seems reasonable and what checks you did; if you use an IV, discuss the exclusion restriction and why it seems reasonable; etc.
- Look at previous years for examples: e.g. here is Sam Asher’s, who we hired; here is Mounir Karadja’s explanation of using an IV; and here is Paolo Abarcar’s clear explanation of an experiment he did.
- job market series 2016
- The weekly FAIV from the Financial Access Initiative has a nice round-up of some of the exciting new papers presented at this year’s NEUDC
- Chris Blattman’s plethora of job market advice.
- Trade Diversion’s list of trade job market papers this year.
- Mathematica’s John Deke, in “big surprises on small experiments”, argues that it can be possible to do credible randomized trials with only 6 to 10 clusters under some conditions.
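One standard route to credible inference with so few clusters (not necessarily the approach Deke advocates) is exact randomization inference over cluster-level assignments: with 8 clusters and 4 treated there are only C(8,4) = 70 possible assignments, so the smallest attainable two-sided p-value is 2/70 ≈ 0.029 – tight, but still below 0.05. A minimal sketch, with hypothetical data:

```python
import itertools
import numpy as np

def ri_pvalue(cluster_means, treated_idx):
    """Exact randomization-inference p-value for a difference in cluster
    means, enumerating every possible assignment of clusters to treatment."""
    cm = np.asarray(cluster_means, dtype=float)
    n, k = len(cm), len(treated_idx)

    def diff(idx):
        # treated-minus-control difference in cluster-level means
        return cm[list(idx)].mean() - np.delete(cm, list(idx)).mean()

    observed = abs(diff(treated_idx))
    placebo = [abs(diff(a)) for a in itertools.combinations(range(n), k)]
    # share of placebo assignments at least as extreme as the observed one
    return float(np.mean([p >= observed for p in placebo]))
```

For example, with 8 hypothetical cluster means and clusters 4–7 treated, `ri_pvalue([1, 2, 3, 4, 10, 11, 12, 13], (4, 5, 6, 7))` returns 2/70, since only the realized assignment and its mirror image produce so large a gap. With 6 clusters and 3 treated, C(6,3) = 20 and the minimum two-sided p-value is 2/20 = 0.10, so significance at 5% is unattainable – one concrete sense in which very small cluster counts only work "under some conditions".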
- A new CEPR e-book on migration and refugees has lots of short succinct summaries of important research.
- VoxEU article by Cartwright and Deaton summarizes their new paper on the limits of RCTs.
- Funding opportunity for PhD students – JPAL has grants for research transparency offering significant financial support and tuition assistance for you to replicate papers before they are published – also a good opportunity to see new research firsthand.
I recently shared five failures from some of my impact evaluations. Since this is just scratching the surface of all the many ways I’ve experienced failures in attempting to conduct impact evaluations, I thought I’d share a second batch now too.
Case 4: working with a private bank in Uganda to offer business training to their clients, written up as a note here.