David McKenzie's blog

When should you cluster standard errors? New wisdom from the econometrics oracle

David McKenzie's picture

In ancient Greek times, important decisions were never made without consulting the high priestess at the Oracle of Delphi. She would deliver wisdom from the gods, although this advice was sometimes vague or confusing, and was often misinterpreted by mortals. Today I bring word that the high priestess and priests (Athey, Abadie, Imbens and Wooldridge) have delivered new wisdom from the god of econometrics on the important question of when you should cluster standard errors. This is definitely one of life’s most important questions, as any keen player of seminar bingo can surely attest. In case their paper is all Greek to you (half of it literally is), I will attempt to summarize their recommendations, so that your standard errors may be heavenly.

Weekly links October 13: an anthropological rationale for randomization, what is Jholawala Economics?, changing norms, and more…

  • Another reason to justify random selection – Michael Schulson in Aeon: “there are plenty of situations when random chance really is your best option. And those situations might be far more prevalent in our modern lives than we generally admit.” An interesting discussion, drawing on anthropology, of how different cultures have introduced randomness into decision-making, with the advantage being that it stops you from using bad reasons for making decisions. “we might want to come to terms with the reality of our situation, which is that our lives are dominated by uncertainty, biases, subjective judgments and the vagaries of chance”
  • Maitreesh Ghatak reviews Jean Dreze’s new book “Sense and Solidarity - Jholawala Economics for Everyone”. See also this twitter thread by Abhijeet Singh on whether Dreze is underappreciated in development economics.

Weekly links October 6: A Bridge too far for Jishnu, reducing recruiting information frictions, cash transfers in Niger, improving tax collection in Brazil, and more…

  • On the future development blog, Jishnu Das discusses recent experiments on public-private provision of education in Liberia and Pakistan, takes on Bridge Academies, and highlights the importance of good measurement: “in Liberia, Romero et al. tracked students to ensure that schools could not ‘game’ the evaluation by sending weaker children home: ‘We took great care to avoid differential attrition: Enumerators conducting student assessments participated in extra training on tracking and its importance, and dedicated generous time to tracking. Students were tracked to their homes and tested there when not available at school.’ Finding children who have left a school is like finding a needle in a haystack. In a country where only 42 percent have access to a cell phone, it’s heroism.”
  • On Straight Talk on Evidence, James Heckman and co-authors get taken to task for torturing data to overstate findings in a 2014 Science article on the long-term effects of the Abecedarian ECD program. The specific criticisms concern sample size (and its reporting) and multiple comparisons. A response and a rejoinder follow the post.

Finally, a way to do easy randomization inference in Stata!


Randomization inference has been increasingly recommended as a way of analyzing data from randomized experiments, especially in samples with a small number of observations, with clustered randomization, or with high leverage (see for example Alwyn Young’s paper, and the books by Imbens and Rubin, and Gerber and Green). However, one of the barriers to widespread use in development economics has been that, to date, no simple command for implementing it in Stata has been available, requiring authors to program it from scratch.
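For readers new to the idea, the underlying logic is easy to sketch. Under the sharp null of no treatment effect for anyone, the observed outcomes are fixed, and we can re-draw the treatment assignment many times to ask how often a difference in means at least as large as the observed one arises by chance. Here is a minimal illustrative sketch in Python (not Stata, and not part of any command discussed below; the function and variable names are mine) for a simple individually-randomized experiment:

```python
import random

def randomization_inference(y, t, reps=1000, seed=42):
    """Randomization (permutation) test of the sharp null of no effect.

    y: list of outcomes; t: list of 0/1 treatment indicators.
    Returns the two-sided randomization p-value for the
    difference in means between treated and control units.
    """
    rng = random.Random(seed)

    def diff_in_means(assign):
        treated = [yi for yi, ti in zip(y, assign) if ti == 1]
        control = [yi for yi, ti in zip(y, assign) if ti == 0]
        return sum(treated) / len(treated) - sum(control) / len(control)

    observed = diff_in_means(t)
    extreme = 0
    for _ in range(reps):
        perm = t[:]          # re-draw the treatment assignment
        rng.shuffle(perm)    # preserves the number of treated units
        if abs(diff_in_means(perm)) >= abs(observed):
            extreme += 1
    return extreme / reps
```

The p-value is simply the share of re-randomizations that produce a test statistic as extreme as the one actually observed; clustered or stratified designs require permuting assignment at the level at which randomization actually occurred.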

This has now changed with a new command, ritest, written by Simon Hess, a PhD student whom I met just over a week ago at Goethe University in Frankfurt. The command is extremely simple to use, so I thought I would introduce it and share some tips after playing around with it a little. The Stata Journal article is also now out.

How do I get this command?
Simply type findit ritest in Stata.
[edit: that will get the version from the Stata Journal. However, to get the most recent version with a couple of bug fixes noted below, type

net describe ritest, from(
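Once installed, basic usage follows the pattern below (a sketch based on the command’s documented syntax; `outcome` and `treatment` are placeholder variable names, not from any particular study):

```stata
* Randomization inference for the coefficient on treatment:
* permute the assignment 1,000 times and recompute _b[treatment]
ritest treatment _b[treatment], reps(1000) seed(546): regress outcome treatment
```

The part after the colon is the usual estimation command; ritest permutes the variable named before the comma and reports the share of permutations that produce a coefficient at least as large in absolute value as the estimated one.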

Weekly links September 29: mixed methods (not just for footnotes), parenting in China, step away from that quadratic, and more…


Six Questions with Chris Udry

This is the first in a potential new series of short interview posts with development economists. Chris Udry was one of the pioneers of detailed fieldwork in development as a grad student and has continued to be one of the most respected leaders in the profession. While at Yale he taught David, and advised both David and Markus, and is famous for the amount of time he puts into his grad students. Most recently he has moved from Yale to Northwestern. We thought this might be a good time for him to reflect on his approach to teaching and advising, and to share his thoughts on some of the emerging issues and trends in development economics.
  1. Let’s start with your approach to teaching development economics at the graduate level. The class when you taught David in 1999 was heavy on the agricultural household model and understanding micro development through different types of market failures. Most classes would involve in-depth discussion of one or at most two papers, with a student assigned most weeks to lead this discussion. There was a lot of discussion of the empirical methods in different papers, but no replication tasks and the only empirical work was as part of a term paper. How has your approach to teaching development changed (or not) since this time?

Try as I might, I have made little progress on changing my basic approach to teaching. The papers and topics have changed, but the essence of my graduate teaching remains the in-depth discussion of a paper or two each class. I’ve tried to expand the use of problem sets, and had a number of years of replication assignments. The first was hindered by my own inadequate energy (it’s hard making up decent questions!). I found that replication exercises required too much time and effort in data cleaning by students relative to their learning gain. Students were spending too much time cleaning, merging and recreating variables and too little time thinking about the ideas in the paper. I’ll reassess assigning replication this year, because there may now be enough well-documented replication datasets and programs available. With these as a starting point, it would be possible to get quickly into substantive issues in the context of a replication.

Weekly links September 15: the definitive what we know on Progresa, ethics of cash, a new approach to teaching economics, and more…

  • In the latest JEL, Parker and Todd survey the literature on Progresa/Oportunidades: some bits of interest to me included:
    • CCTs have now been used in 60+ countries;
    • over 100 papers have been published using the Progresa/Oportunidades data, with at least 787 hypotheses tested – multiple testing corrections don’t change the conclusions that the program had health and education effects, but do cast doubt on papers claiming impacts on gender issues and demographic outcomes;
    • FN 16, which notes that at the individual level, there are significant differences in 32% of the 187 characteristics on which baseline balance is tested, with the authors arguing that this is because the large sample size leads to a tendency to reject the null at conventional levels – a point that seems inconsistent with using the same significance levels for measuring treatment effects;
    • Two decades later, we still don’t know whether Progresa led to more learning, just more years in school;
    • One of the few negative impacts is an increase in deforestation in communities that received the CCT.
  • Dave Evans asks whether it matters which co-author submits a paper, and summarizes responses from several editors; he also gives a short summary of a panel on how to effectively communicate results to policymakers.

Weekly links September 8: career advice, measuring empowerment, is anyone reading, lumpy cash, and more…


Is it possible to re-interview participants in a survey conducted by someone else?


I recently received an email from a researcher who was interested in trying to re-interview participants in one of my experiments, to test several theories about whether that intervention had impacts on political participation and other political outcomes. I get these requests infrequently, but this is by no means the first. Another example in the last year was someone who had done in-depth qualitative interviews with participants in a different experiment of mine, and then wanted to be able to link their responses on my surveys to their responses on his. I imagine I am not alone in getting such requests, and I don’t think there is a one-size-fits-all answer to when this can be possible, so I thought I would set out some thoughts about the issues here, and see if others can also share their thoughts and experiences.

Confidentiality and Informed Consent: typically when participants are invited to respond to a survey or participate in a study, they are told i) that the purpose of the survey is X, and that it will perhaps involve a baseline survey and several follow-ups; and ii) that all responses they provide will be kept confidential and used for research purposes only. These factors make it hard to then hand over identifying information about respondents to another researcher.
However, I think this can be addressed via the following system:

Monthly links for August: What did you miss while we were on summer break?
