
The (often unspoken) assumptions behind the difference-in-differences estimator in practice

Jed Friedman
This post is co-written with Ricardo Mora and Iliana Reggio
 
The difference-in-differences (DID) evaluation method should be very familiar to our readers – a method that infers program impact by comparing the pre- to post-intervention change in the outcome of interest for the treated group relative to a comparison group. The key assumption is what is known as the “Parallel Paths” assumption, which posits that the average change in the comparison group represents the counterfactual change the treatment group would have experienced absent the treatment. The method is popular in part because the data requirements are not particularly onerous – it requires data from only two points in time – and the results are robust to any confounder that does not violate the Parallel Paths assumption. When data on several pre-treatment periods exist, researchers like to probe the Parallel Paths assumption by testing for differences in the pre-treatment trends of the treatment and comparison groups. Equality of pre-treatment trends may lend confidence, but it cannot directly test the identifying assumption, which is by construction untestable. Researchers also tend to explicitly model the “natural dynamics” of the outcome variable by including flexible time dummies for the control group and a parametric time-trend differential between the control and treated groups in the estimating specification.
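The two-group, two-period logic above can be sketched in a few lines. Below is a minimal illustrative simulation (not from the post – the sample size, the true effect of 2.0, and the common time trend of +1.0 are all assumptions for illustration) in which Parallel Paths holds by construction, so the simple double difference recovers the treatment effect:

```python
import numpy as np

# Minimal DID sketch on simulated data (all parameters are illustrative).
rng = np.random.default_rng(42)
n = 5000
treated = rng.integers(0, 2, n)        # 1 = treatment group, 0 = comparison
effect = 2.0                           # true treatment effect (assumed)

# Parallel Paths holds by construction: both groups share a common
# time trend of +1.0; treated units differ by a fixed level of +3.0.
y_pre = 3.0 * treated + rng.normal(0.0, 1.0, n)
y_post = 3.0 * treated + 1.0 + effect * treated + rng.normal(0.0, 1.0, n)

# Double difference: (pre-to-post change for treated) minus (change for comparison).
change_treat = y_post[treated == 1].mean() - y_pre[treated == 1].mean()
change_comp = y_post[treated == 0].mean() - y_pre[treated == 0].mean()
did = change_treat - change_comp       # should be close to the true effect
```

The same number can be obtained by regressing the outcome on a group dummy, a period dummy, and their interaction; the interaction coefficient is the DID estimate.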
 
Typically, the applied researcher’s practice of DID ends at this point. Yet a very recent working paper by Ricardo Mora and Iliana Reggio (two co-authors of this post) points out that DID-as-commonly-practiced implicitly invokes assumptions other than Parallel Paths – assumptions perhaps unknown to the researcher – which may influence the estimate of the treatment effect. These assumptions concern the dynamics of the outcome of interest, both before and after the introduction of treatment, and the implications of the particular dynamic specification for the Parallel Paths assumption.

It’s Time Again for Submissions for our Annual Blog Your Job Market Paper Series

David McKenzie
We are pleased to launch, for the third year, a call for PhD students on the job market to blog their job market paper on the Development Impact blog. We welcome blog posts on anything related to empirical development work, impact evaluation, or measurement. For examples, you can see last year's series. We will follow the same process laid out by Berk last year, which is as follows:

IO and Development Part 3: Where are some opportunities for work intersecting these areas?

David McKenzie
The first two posts on this topic this week have looked at the gap in the use of IO in development, and some possible reasons why IO tools might not be used as much. Today, the final post in my Q&A with Dan Keniston [DK] and Katja Seim [KS], looks at where there might be low-hanging fruit from better use of methods from IO in development.

Why don’t we see more work at the intersection of IO and Development? Part Two - methods

David McKenzie
Yesterday’s Q&A with Dan Keniston [DK] and Katja Seim [KS] looked at whether there is a gap in the use of IO methods in development, and at some examples of good work at this intersection of fields. Today I ask about a couple of reasons why we don’t see more work in this area.

Why don’t we see more work at the intersection of IO and Development? Part One – is there a gap?

David McKenzie
Ever since I was in grad school, I remember hearing people say that development and industrial organization (IO) seem like natural fields for graduate students to combine, and yet my sense is that far fewer people take this combination than, for example, development and labor or development and public economics. This is seen in the literature produced – the figure below shows the share of the last 100 BREAD Working Papers in development with different subfields (according to their JEL codes):

Friday links November 8: Halloween redux, aspirations, rants against the wrong questions, and more…

David McKenzie
  • On the CSAE blog – the reverse couch-potato effect – the impact of inspirational movies on aspirations and short-term behavior – new work by Stefan Dercon, Tanguy Bernard, Kate Orkin and Alemayehu Taffesse. The blog post has a couple of examples of the movies used to show people in rural Ethiopia how people like them had made choices that led to success.

Policy learning with impact evaluation and the “science of delivery”

Jed Friedman
The “science of delivery”, a relatively new term among development practitioners, refers to the focused study of the processes, contexts, and general determinants of the delivery of public services and goods. Or to paraphrase my colleague Adam Wagstaff, the term represents a broadening of inquiry towards an understanding of the “how to deliver” and not simply a focus on the “what to deliver”.