
Weekly links September 30: re-analysis and respectful criticism, working with NGOs, off to the big city, and more…

David McKenzie
  • Solomon Hsiang and Nitin Sekar respond to the guest post by Quy-Toan Do and co-authors which had re-analyzed their data to question whether a one-time legal sale of ivory had increased elephant poaching. They state “Their claims are based on a large number of statistical, coding, and inferential errors.  When we correct their analysis, we find that our original results hold for sites that report a large number of total carcasses; and the possibility that our findings are artifacts of the data-generating process that DLM propose is extremely rare under any plausible set of assumptions”.
    • We screwed up by hosting this guest post without checking that Do and co-authors had shared it with the original co-authors and had given them a chance to respond.
    • We do believe that blogs have an important role to play in discussing research (see also Andrew Gelman on this), but Uri Simonsohn’s piece this week on how to civilly argue with someone else’s analysis has good practical ideas for both social media and refereeing; in particular, sharing a re-analysis with the original authors before publishing it is good practice. We will try to adhere to this better in the future.
    • We are waiting to see whether Do and co-authors have any further response, and plan to post only one more summary once both sides have had a chance to iterate. We plan to avoid elephant wars, since worm wars were enough.
  • In somewhat related news, Dana Carney shows how to gracefully accept and respond to criticism over your earlier work.

Training teachers on the job: What we know, and why we know less than we should

Anna Popova

or, why we need more systematic (and simply more) reporting on the nature of interventions

The hope. Last year, we reviewed six reviews of which interventions work to improve learning. One promising area of overlap across reviews had to do with training teachers who were already on the job (i.e., in-service teacher training or teacher professional development). Specifically, we proposed that “individualized, repeated teacher training, associated with a specific method or task” was associated with learning gains.

Dialing for Data: Enterprise Edition

Markus Goldstein
Surveys are expensive. And, in sub-Saharan Africa in particular, a big part of that cost is logistics – fuel, car hire, and the like. So, with increasing mobile phone coverage, more folks are thinking about, and actually using, phones in lieu of in-person interviews to complete surveys. The question is: what does that do to data quality?

Weekly links September 23: yay for airlines x2, dig out those old audit studies, how to study better, and more…

David McKenzie
  • The second edition of the book Impact Evaluation in Practice by Paul Gertler, Sebastian Martinez, Patrick Premand, Laura Rawlings and Christel Vermeersch is now available. For free online! “The updated version covers the newest techniques for evaluating programs and includes state-of-the-art implementation advice, as well as an expanded set of examples and case studies that draw on recent development challenges. It also includes new material on research ethics and partnerships to conduct impact evaluation.”
  • Interesting Priceonomics piece on R.A. Fisher, and how he fought against the idea that smoking causes cancer.
  • Oxfam blog post on power calculations for propensity score matching
  • The importance of airlines for research and growth:

How do you scale up an effective education intervention? Iteratively, that’s how.

David Evans
So you have this motivated, tightly controlled, highly competent non-government organization (NGO). And they implement an innovative educational experiment, using a randomized controlled trial to test it. It really seems to improve student learning. What next? You try to scale it or implement it within government systems, and it doesn’t work nearly as well.

You ran a field experiment. Should you then run a regression?

Berk Ozler
Recently, a colleague came over for dinner and made the following statement: “Person X told me that Imbens is now saying that we should not be running regressions to estimate average treatment effects in experiments.” When I showed some sympathy for this statement while focusing more on making tortillas, she was resistant: it was clear she did not want to give up on regression models…
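
To make the question concrete: in the simplest randomized experiment, regressing the outcome on an intercept and a treatment dummy reproduces the difference in means between treated and control units exactly, so the debate is really about what happens once covariates, heteroskedasticity, or unequal assignment probabilities enter. A minimal sketch with simulated (hypothetical) data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical experiment: 500 units, half randomly assigned to treatment.
n = 500
treat = rng.permutation(np.repeat([0, 1], n // 2))
y = 2.0 + 1.5 * treat + rng.normal(size=n)  # true treatment effect = 1.5

# Estimator 1: simple difference in means between treated and control.
diff_means = y[treat == 1].mean() - y[treat == 0].mean()

# Estimator 2: OLS of y on an intercept and the treatment dummy.
X = np.column_stack([np.ones(n), treat])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# The OLS slope coefficient equals the difference in means exactly.
assert np.isclose(beta[1], diff_means)
```

The two estimators only diverge once you add covariates or weights to the regression, which is where the inference questions raised in the post begin.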

Weekly links September 16: infrastructure myths, surveying rare populations x 2, being a development mum, and more…

David McKenzie

To trade or not to trade elephant ivory? That’s going to be the question.

Quy-Toan Do (World Bank), with Andrei Levchenko (University of Michigan) and Lin Ma (National University of Singapore)
As the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) convenes its 17th Conference of the Parties later this month, the elephant conservation policy space remains polarized: some countries advocate continuing the complete ban on international legal trade in ivory, while others, such as Namibia and Zimbabwe, propose resuming a regulated international trade in their legal ivory stocks. The legal ivory trade is generally opposed by countries with small or declining elephant populations that are against the consumptive use of wildlife. They fear that a legal trade will increase demand for ivory and thereby increase poaching in their countries. On the other hand, the legal trade is supported by countries with stable or growing elephant populations, who believe in sustainable consumptive use. They feel that a continued ban on the ivory trade penalizes them for their conservation successes, and removes an important incentive for conserving elephants, other wildlife, and their habitats by cutting off funding for management and incentives for local communities.

Power Calculations for Regression Discontinuity Evaluations: Part 3

David McKenzie
This is the third, and final, post in a series on doing power calculations for regression discontinuity (see part 1 and part 2).
Scenario 3 (SCORE DATA AVAILABLE, AT LEAST PRELIMINARY OUTCOME DATA AVAILABLE; OR SIMULATED DATA USED): Having this data available at the planning stage of an impact evaluation seems less usual to me, but it is possible in some settings (e.g., you have the score data and administrative data on a few outcomes, and are deciding whether to collect survey data on other outcomes). More generally, you will be in this scenario once you have collected all your data. Moreover, the methods discussed here can be used with simulated data in cases where you don’t have data.

There is a new Stata package, rdpower, written by Matias Cattaneo and co-authors that can be really helpful in this scenario (thanks also to him for answering several questions I had on its use). It calculates power and sample sizes, assuming you will then use the rdrobust command to analyze the data. There are two related commands:
  • rdpower: calculates power, given your data and sample size, for a range of different effect sizes
  • rdsampsi: calculates the sample size you need to achieve a given power, given your data and that you will analyze it with rdrobust.
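
The logic behind these commands can also be illustrated by direct simulation when you are working with simulated rather than real data. The sketch below is not the rdpower package itself: it is a minimal Python illustration assuming a sharp design with cutoff at zero, a uniform kernel, a hand-picked bandwidth, and conventional OLS standard errors (rdpower instead builds on rdrobust's bias-corrected robust inference). All parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def rd_power(n, tau, h=0.5, sigma=1.0, sims=500, alpha_crit=1.96):
    """Simulated power for a sharp RD with cutoff at 0.

    Repeatedly draws data with a jump of size tau at the cutoff,
    fits a local linear regression (uniform kernel, bandwidth h),
    and returns the fraction of simulations rejecting tau = 0.
    """
    reject = 0
    for _ in range(sims):
        score = rng.uniform(-1, 1, n)
        treat = (score >= 0).astype(float)
        y = 1.0 + 0.5 * score + tau * treat + rng.normal(0, sigma, n)
        # Keep only observations within the bandwidth (uniform kernel).
        m = np.abs(score) <= h
        X = np.column_stack(
            [np.ones(m.sum()), treat[m], score[m], treat[m] * score[m]]
        )
        yy = y[m]
        beta = np.linalg.lstsq(X, yy, rcond=None)[0]
        # Conventional OLS standard error for the jump coefficient.
        resid = yy - X @ beta
        s2 = resid @ resid / (m.sum() - X.shape[1])
        cov = s2 * np.linalg.inv(X.T @ X)
        t = beta[1] / np.sqrt(cov[1, 1])
        if abs(t) > alpha_crit:
            reject += 1
    return reject / sims
```

For example, `rd_power(1000, tau=0.0)` should return roughly the nominal size of 0.05, while power rises toward one as `tau` or `n` grows; looping over a grid of `tau` values mimics what rdpower reports for a range of effect sizes.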

Weekly links September 9: no to cash? Machine learning for economists, climate economics, stupid regulations unchanged and more…

David McKenzie