- In the latest JEP, how to write an effective referee report, with three specific recommendations: I) make clear the contribution, and give appropriate value to innovative work: “The importance of a contribution can be undervalued in some cases by referees and editors. After all, papers that are more ambitious are often more likely to have loose ends, which gives referees and editors a reason to avoid taking a chance on them”; II) divide comments into two clearly demarcated sections: 1) problems that make the paper unpublishable, which (if revision is invited) must be addressed before the paper is publishable; and 2) problems that are not essential for the publishability of the paper, which should be labeled as “suggestions”; and III) in making requests of authors, weigh the costs of the request: “It is not enough that a particular request will improve the paper. The benefits must exceed the costs, so that the improvement has positive net present value. Since the author bears the costs, it is easy for a referee to make absurd demands thoughtlessly. Don’t.” Finally, after receiving multiple 5+ page referee reports recently, I agree that “Unless a referee needs to make extremely technical points, 2–3 pages should be sufficient.”
Cash transfers are great – lots of people are telling you that on a continuous basis. However, it is an open question as to whether such programs can improve the wellbeing of their beneficiaries well after the cessation of support. As cash transfer programs continue to grow as major vehicles for social protection, it is increasingly important to understand if these programs break the cycle of intergenerational poverty, or whether the benefits simply evaporate when the money runs out…
- In Science last week, Rema Hanna summarizes several studies that look at the ways technology such as biometric smartcards is helping to reduce corruption in transfer programs.
- Marc Bellemare has a second post on dealing with imperfect instruments.
- The Stata Blog on how to use the putexcel command to export your Stata output into nice, fancy Excel tables.
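As a quick illustration of the workflow (a minimal sketch using Stata’s built-in auto dataset; the filename and cell layout are my own choices, not taken from the post):

```stata
* Load a toy dataset and run a regression
sysuse auto, clear
regress price mpg weight

* Point putexcel at a workbook, then write a title and the full
* results matrix (coefficients, SEs, t, p, CIs), with row/column labels
putexcel set myresults.xlsx, replace
putexcel A1 = "Regression of price on mpg and weight"
matrix results = r(table)'
putexcel A2 = matrix(results), names
```

From there, additional `putexcel` calls can add formatted cells one at a time, which is handy for building a custom table layout rather than dumping the whole matrix.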
Impact evaluations of interventions aiming to reduce intimate partner violence and sexual and gender-based violence (IPV-SGBV) have mostly failed to detect statistically significant impacts.
Randomized controlled trials are kind of a big deal in development economics right now. A recent article in The Economist shows a sizeable rise in the use of RCTs in economics overall over the last 15 years, and recent analysis by David McKenzie shows that RCTs make up a large minority of development papers in top journals (see the figures below).
Source: The Economist on the left; McKenzie (2016) on the right.
In his new book Experimental Conversations: Perspectives on Randomized Trials in Development Economics, Tim Ogden has assembled interviews with a distinguished group that interacts with RCTs in every imaginable way: you have those who pioneered the use of the method in development economics, the next generation of researchers, the chief critics of the method, and consumers of development RCTs at organizations like GiveWell, the Ford and Grameen Foundations, and the Center for Global Development. You also hear from one broader observer of economics as a field (Tyler Cowen) and one of the scholars who pioneered the use of RCTs in U.S. policy (Judy Gueron), to give added perspective.
- Interview with Mark Rosenzweig: “One of the advantages of studying developing countries is that it’s cheap to collect data and the response rates are much higher than in the United States. I’ve helped lead a survey in the U.S. of 8,500 households — it cost $23 million. In India, where the questionnaire is probably eight times longer, the total cost is about $750,000... Five hours is a substantial commitment of time, what’s the response rate? Our response would be somewhere around 90%. People enjoy telling you about their stuff. I’ve surveyed a lot of farmers in India and they want to show you everything. They enjoy it. People there value their time differently than we do. In most villages there are no cinemas or shopping centers there. There’s no television. They enjoy talking to people. That’s different than here. We all have better things to do than sitting down and answering silly questions over the phone, let alone allowing somebody into your house. Sitting down and talking to people is an interesting activity for these folks.”
One of the things I get asked when people are designing experiments – when they are either interested in or worried about spillover effects – is how to divvy up the clusters into treatment and control, and what share of individuals within treatment clusters to assign as within-cluster controls. The answer may seem straightforward – assigning a third to each group looks intuitive, and I have seen a few designs that do this – but it turns out to be a bit more complicated than that. There was no software that I was aware of to help you with such power calculations, until now...
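To give a feel for what such a calculation involves, here is a minimal simulation-based sketch in Python (this is not the software the post refers to; the function name, parameter values, and the simple cluster-mean z-test are all my own illustrative choices):

```python
import numpy as np

def power_partial_population(n_clusters=100, cluster_size=20, p_within=0.5,
                             direct_effect=0.3, spillover_effect=0.1,
                             icc=0.05, n_sims=500, seed=0):
    """Simulated power for the *direct* treatment effect in a partial-population
    design: half the clusters are treated; within treated clusters a share
    p_within of individuals gets the program, and the untreated individuals
    (within-cluster controls) receive only the spillover effect.

    The test compares cluster-level means of treated individuals against
    cluster means in pure-control clusters, using a two-sided 5% z-test on
    cluster means (conservative but valid under clustering)."""
    rng = np.random.default_rng(seed)
    sig_u = np.sqrt(icc)        # between-cluster sd (total variance normalized to 1)
    sig_e = np.sqrt(1.0 - icc)  # within-cluster sd
    half = n_clusters // 2      # clusters are exchangeable, so treating the
    rejections = 0              # first half is equivalent to random assignment
    for _ in range(n_sims):
        u = rng.normal(0.0, sig_u, n_clusters)   # cluster random effects
        treated_means, control_means = [], []
        for c in range(n_clusters):
            y = u[c] + rng.normal(0.0, sig_e, cluster_size)
            if c < half:  # treated cluster: randomize individuals within it
                is_t = rng.random(cluster_size) < p_within
                y = y + direct_effect * is_t + spillover_effect * ~is_t
                if is_t.any():
                    treated_means.append(y[is_t].mean())
            else:         # pure control cluster
                control_means.append(y.mean())
        t, ctr = np.array(treated_means), np.array(control_means)
        se = np.sqrt(t.var(ddof=1) / len(t) + ctr.var(ddof=1) / len(ctr))
        if abs(t.mean() - ctr.mean()) / se > 1.96:
            rejections += 1
    return rejections / n_sims
```

Re-running this over a grid of `p_within` values shows the trade-off the post alludes to: a larger within-cluster treated share boosts power for the direct effect but shrinks the sample available for estimating spillovers on within-cluster controls.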