Difference-in-difference
http://blogs.worldbank.org/impactevaluations/taxonomy/term/11847/all
A Curated List of Our Postings on Technical Topics – Your One-Stop Shop for Methodology
http://blogs.worldbank.org/impactevaluations/curated-list-our-postings-technical-topics-your-one-stop-shop-methodology
Rather than the usual list of Friday links, this week I thought I’d follow up on our post by Guido Imbens yesterday on clustering (http://blogs.worldbank.org/impactevaluations/introducing-ask-guido) and the post earlier this week by Dave Evans on Hawthorne effects (http://blogs.worldbank.org/impactevaluations/hawthorne-effect-what-do-we-really-learn-watching-teachers-and-others) with a curated list of our technical postings, to serve as a one-stop shop for your technical reading.

Fri, 21 Feb 2014 12:46:26 +0000, David McKenzie

The often (unspoken) assumptions behind the difference-in-difference estimator in practice
http://blogs.worldbank.org/impactevaluations/often-unspoken-assumptions-behind-difference-difference-estimator-practice
This post is co-written with Ricardo Mora (http://www.eco.uc3m.es/~ricmora/) and Iliana Reggio (http://www.eco.uc3m.es/~ireggio/).

The difference-in-difference (DID) evaluation method should be very familiar to our readers: it infers program impact by comparing the pre- to post-intervention change in the outcome of interest for the treated group with the same change in a comparison group. The key assumption is the “Parallel Paths” assumption, which posits that the average change in the comparison group represents the counterfactual change the treatment group would have experienced in the absence of treatment. The method is popular in part because its data requirements are not particularly onerous – it needs data from only two points in time – and its results are robust to any confounder that does not violate the Parallel Paths assumption. When data on several pre-treatment periods exist, researchers like to probe the Parallel Paths assumption by testing for differences in the pre-treatment trends of the treatment and comparison groups. Equality of pre-treatment trends may lend confidence, but it cannot directly test the identifying assumption, which is untestable by construction because it concerns an unobserved counterfactual. Researchers also tend to model the “natural dynamics” of the outcome variable explicitly, by including flexible time dummies for the control group and a parametric time-trend differential between the control and treated groups in the estimating specification.

Typically, the applied researcher’s practice of DID ends at this point. Yet a very recent working paper (http://e-archivo.uc3m.es/handle/10016/16065) by Ricardo Mora and Iliana Reggio (two co-authors of this post) points out that DID as commonly practiced implicitly invokes assumptions other than Parallel Paths, assumptions perhaps unknown to the researcher, which may influence the estimate of the treatment effect. These assumptions concern the dynamics of the outcome of interest, both before and after the introduction of treatment, and the implications of the particular dynamic specification for the Parallel Paths assumption.

Thu, 21 Nov 2013 12:41:00 +0000, Jed Friedman