- The second edition of the book Impact Evaluation in Practice by Paul Gertler, Sebastian Martinez, Patrick Premand, Laura Rawlings and Christel Vermeersch is now available. For free online! “The updated version covers the newest techniques for evaluating programs and includes state-of-the-art implementation advice, as well as an expanded set of examples and case studies that draw on recent development challenges. It also includes new material on research ethics and partnerships to conduct impact evaluation.”
- Interesting Priceonomics piece on R.A. Fisher, and how he fought against the idea that smoking causes cancer
- Oxfam blog post on power calculations for propensity score matching
The importance of airlines for research and growth:
- VoxEU piece on work showing that when Southwest Airlines opens up routes between U.S. cities, more scientific collaboration in chemistry occurs (diff-in-diff), and an NPR story on this
- New working paper by Campante and Yanagizawa-Drott uses a discontinuity in the connectedness of cities at 6,000 miles to show air links have a positive impact on economic activity by inducing links between businesses. The Kennedy School provides a media summary here.
- Ed Glaeser on the myths about infrastructure spending and jobs – in City Journal.
- Lindy Kanan on the trials and tribulations of being a mum who wants to work on development on the Devpolicy blog.
- On the Future Development blog, Matt Groh and Tara Vishwanath report on surveying Syrian refugees and residents of host countries – including discussion of how to do a Facebook survey of refugees, and how what people fear varies by country – natives in Europe don’t fear the refugees taking their jobs, but do fear their culture changing; while natives in Jordan fear the reverse.
- Chris Blattman puts together all his various pieces of advice on the undergrad-MA-PhD-academic process.
Quy-Toan Do (World Bank), with Andrei Levchenko (University of Michigan) and Lin Ma (National University of Singapore)
As the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) convenes its 17th Conference of the Parties later this month, the elephant conservation policy space remains polarized: some countries advocate continuing the complete ban on international legal trade in ivory, while others, such as Namibia and Zimbabwe, propose resuming a regulated international trade in their legal ivory stocks. The legal ivory trade is generally opposed by countries with small or declining elephant populations that are against the consumptive use of wildlife; they fear that a legal trade will increase demand for ivory and thereby increase poaching in their countries. On the other hand, the legal trade is supported by countries with stable or growing elephant populations, who believe in sustainable consumptive use. They feel that a continued ban on the ivory trade penalizes them for their conservation successes and removes an important incentive for conserving elephants, other wildlife, and their habitats: namely, the funding a legal trade would provide for management and the incentives it would give to local communities.
Scenario 3 (SCORE DATA AVAILABLE, AT LEAST PRELIMINARY OUTCOME DATA AVAILABLE; OR SIMULATED DATA USED): having data already available seems less usual to me in the planning stages of an impact evaluation, but it could happen in some settings (e.g. you have the score data and administrative data on a few outcomes, and are then deciding whether to collect survey data on other outcomes). More generally, you will be in this scenario once you have collected all your data. Moreover, the methods discussed here can be used with simulated data in cases where you don’t have data.
There is a new Stata package, rdpower, written by Matias Cattaneo and co-authors that can be really helpful in this scenario (thanks also to him for answering several questions I had on its use). It calculates power and sample sizes, assuming you will then use the rdrobust command to analyze the data. There are two related commands here:
- rdpower: this calculates the power, given your data and sample size for a range of different effect sizes
- rdsampsi: this calculates the sample size you need to get a given power, given your data and that you will be analyzing it with rdrobust.
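For intuition about what such a power calculation is doing, here is a stylized simulation sketch in Python. This is a hypothetical illustration only, not the actual rdpower/rdrobust machinery (which also handles optimal bandwidth selection, bias correction, and robust inference): it assumes a uniform score on [-1, 1] with the cutoff at 0, a fixed bandwidth, standard normal outcome noise, and a simple local linear estimator.

```python
import numpy as np

def rd_power_sim(n, tau, bandwidth=0.5, sims=500, alpha_z=1.96, seed=0):
    """Simulated power of a sharp RD with a local linear estimator.

    Score is uniform on [-1, 1] with cutoff 0; outcome noise is N(0, 1).
    A stylized stand-in for what rdpower computes, not its actual method.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(sims):
        x = rng.uniform(-1, 1, n)                    # running variable (score)
        t = (x >= 0).astype(float)                   # sharp treatment assignment
        y = tau * t + 0.5 * x + rng.normal(0, 1, n)  # outcome with true effect tau
        keep = np.abs(x) <= bandwidth                # observations inside bandwidth
        X = np.column_stack([np.ones(keep.sum()), t[keep], x[keep], t[keep] * x[keep]])
        yk = y[keep]
        beta, *_ = np.linalg.lstsq(X, yk, rcond=None)
        resid = yk - X @ beta
        dof = keep.sum() - X.shape[1]
        var_beta = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
        z = beta[1] / np.sqrt(var_beta[1, 1])        # t-stat on the RD jump
        rejections += abs(z) > alpha_z               # two-sided 5% test
    return rejections / sims
```

Running this for a range of effect sizes (e.g. `rd_power_sim(2000, 0.3)`) traces out a power curve the way rdpower does; rdsampsi's job is the inverse, searching for the n that delivers a target power.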
- Interesting GiveDirectly blogpost about people refusing to accept their cash transfers in one part of Kenya: typically 95% of people have accepted the transfers, but in one new county 45% of households were declining them
- An introduction to machine learning for Economists – a nice set of links to papers, examples, statistical software etc. by Anton Tatasenko.
- Uri Simonsohn on why a new AER paper can’t replicate the famous Bertrand and Mullainathan (BM) audit study findings – his contention is that the typically black names used by BM are also associated with low socioeconomic status, which he provides some evidence for.
- If you want a little refresher in key concepts of development economics thinking, Marc Bellemare has been discussing non-separability, heterogeneity, and this week non-anonymity.
Part 1 covered the case where you have no data. Today’s post considers another common setting where you might need to do RD power calculations.
Scenario 2 (SCORE DATA AVAILABLE, NO OUTCOME DATA AVAILABLE): the context here is that assignment to treatment has already occurred via a scoring threshold rule, and you are deciding whether to try to collect follow-up data. For example, referees may have given scores for grant applications, and proposals with scores above a certain level got funded, and now you are deciding whether to collect outcomes several years later to see whether the grants had impacts; or kids may have sat a test to get into a gifted and talented program, and now you want to decide whether to collect data on how these kids have done in the labor market.
Here you have the score data, so don’t need to make assumptions about the correlation between treatment assignment and the score, but can use the actual correlation in your data. However, since the optimal bandwidth will differ for each outcome examined, and you don’t have the outcome data, you don’t know what the optimal bandwidth will be.
In this context you can use the design effect discussed in my first blog post with the actual correlation. You can then check with the full sample to see if you would have sufficient power if you surveyed everyone, and make an adjustment for choosing an optimal bandwidth within this sample using an additional multiple of the design effect as discussed previously. Or you can simulate outcomes and use the simulated outcomes along with the actual score data (see next post).
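As a concrete illustration of the first option, the design-effect adjustment with the actual correlation can be computed along these lines. This is a hypothetical Python sketch: `scores` here is a simulated stand-in for your real score data, and it assumes the design effect takes the 1/(1 - rho^2) form, where rho is the correlation between the sharp treatment indicator and the score.

```python
import numpy as np

def rd_design_effect(scores, cutoff):
    """Design effect 1 / (1 - rho^2), with rho the correlation between the
    (sharp) treatment indicator and the score, computed from the actual
    score data rather than from an assumed distribution."""
    scores = np.asarray(scores, dtype=float)
    treat = (scores >= cutoff).astype(float)
    rho = np.corrcoef(treat, scores)[0, 1]
    return 1.0 / (1.0 - rho ** 2)

# Example with hypothetical data: scale an RCT sample size up to an RD one.
rng = np.random.default_rng(1)
scores = rng.normal(50, 10, 5000)      # stand-in for, e.g., referee scores
deff = rd_design_effect(scores, cutoff=50)
n_rct = 1000                           # sample an RCT would need for your MDE
n_rd = int(np.ceil(n_rct * deff))      # approximate full-sample RD requirement
```

For a roughly normal score with the cutoff at the mean, this design effect comes out at about 2.75, so the RD needs close to three times the RCT sample even before any further adjustment for restricting to an optimal bandwidth.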
I haven’t done a lot of RD evaluations before, but recently have been involved in two studies which use regression discontinuity designs. One issue which comes up is then how to do power calculations for these studies. I thought I’d share some of what I have learned, and if anyone has more experience or additional helpful content, please let me know in the comments. I thank, without implication, Matias Cattaneo for sharing a lot of helpful advice.
One headline lesson is that RD designs have much less power than RCTs for a given sample size, and I was surprised by how much larger a sample you need for an RD.
How to do power calculations will vary depending on the set-up and data availability. I’ll do three posts on this to cover different scenarios:
Scenario 1 (NO DATA AVAILABLE): the context here is a prospective RD study. For example, a project is considering scoring business plans, and those above a cutoff will get a grant; or a project will be targeted using a poverty index, with those below some cutoff getting the program; or a school test is being used, with those who pass the test then being able to proceed to some next stage.
The key features here are that, since the study is being planned in advance, you do not have data on either the score (running variable) or the outcome of interest. The objective of the power calculation is then to see what size sample you would need in the project and survey, and whether it is worth going ahead with the study. Typically your goal here is just to get a sense of the order of magnitude – do I need 500 units or 5,000?
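One rough way to get that order of magnitude is to combine the textbook two-arm RCT sample size formula with a design effect simulated from an assumed score distribution. The sketch below is hypothetical Python: the uniform score, the cutoff at the median, and the 0.25 standard deviation effect size are all illustrative assumptions, and the 1/(1 - rho^2) design effect (rho being the correlation between treatment assignment and the score) is an approximation, not an exact answer.

```python
import numpy as np

def rd_sample_size(effect_sd, score_draws, cutoff, alpha_z=1.96, power_z=0.84):
    """Rough total RD sample size: RCT formula times a simulated design effect.

    effect_sd   : minimum detectable effect in standard deviation units
    score_draws : draws from the assumed score distribution
    cutoff      : assignment threshold (score >= cutoff is treated)
    """
    # Two-arm RCT total sample for a 5% two-sided test with 80% power.
    n_rct = 4 * (alpha_z + power_z) ** 2 / effect_sd ** 2
    # Design effect 1/(1 - rho^2), rho = corr(treatment indicator, score).
    treat = (np.asarray(score_draws, dtype=float) >= cutoff).astype(float)
    rho = np.corrcoef(treat, score_draws)[0, 1]
    deff = 1.0 / (1.0 - rho ** 2)
    return int(np.ceil(n_rct * deff))

# Illustration: uniform scores with the cutoff at the median, MDE of 0.25 SD.
rng = np.random.default_rng(0)
draws = rng.uniform(0, 100, 100_000)
n_needed = rd_sample_size(0.25, draws, cutoff=50)
```

Under these particular assumptions the RCT benchmark is roughly 500 and the design effect is roughly 4, so the RD needs on the order of 2,000 units – exactly the kind of 500-versus-5,000 comparison the power calculation is meant to settle.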