Tools of the Trade
http://blogs.worldbank.org/impactevaluations/taxonomy/term/3844/all
From my mailbox: should I work with only a subsample of my control group if I have big take-up problems?
http://blogs.worldbank.org/impactevaluations/my-mailbox-should-i-work-only-subsample-my-control-group-if-i-have-big-take-problems
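The trade-off in the title is easy to see from the standard error of a difference in means, which scales with sqrt(1/n_T + 1/n_C). A quick sketch in Python (the sample sizes are made up for illustration, not from the post):

```python
import math

def se_diff_in_means(sigma, n_t, n_c):
    """Standard error of a treatment-control difference in means,
    assuming a common outcome standard deviation sigma."""
    return sigma * math.sqrt(1.0 / n_t + 1.0 / n_c)

n_treated = 200  # small treated group after poor take-up

# Surveying the whole control group versus subsampling it down to match:
se_full_control = se_diff_in_means(1.0, n_treated, 1000)
se_matched_control = se_diff_in_means(1.0, n_treated, 200)

# Extra controls always (weakly) reduce the standard error, though with
# sharply diminishing returns once n_C is several times n_T.
```

Throwing away controls never helps precision; the real question is whether the marginal survey cost of the extra controls is worth the shrinking gain.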
Over the past month I’ve received several versions of the same question, so I thought it might be useful to post about it.

Here’s one version: “I have a question about an experiment in which we had a very big problem getting the individuals in the treatment group to take up the treatment. We therefore now have a treated group much smaller than the control group. For efficiency reasons, does it still make sense to survey the whole control group, or should we take a random draw so as to have equal numbers of treated and control?”

And another version…

Mon, 20 Jul 2015 | David McKenzie

Allocating Treatment and Control with Multiple Applications per Applicant and Ranked Choices
http://blogs.worldbank.org/impactevaluations/allocating-treatment-and-control-multiple-applications-applicant-and-ranked-choices
This came up in the context of work with Ganesh Seshan designing an evaluation of a computer training program for migrants. The program was taught in one three-hour class per week for several months. Classes ran Sunday, Tuesday, and Thursday evenings from 5-8 pm, and there were four separate slots on Friday, the first day of the weekend, so in total there were seven possible sessions people could attend. However, most migrants would prefer to go on the weekend, and many would not be able to attend on particular days of the week.

Tue, 07 Jul 2015 | David McKenzie

Endogenous stratification: the surprisingly easy way to bias your heterogeneous treatment effect results and what you should do instead
http://blogs.worldbank.org/impactevaluations/endogenous-stratification-surprisingly-easy-way-bias-your-heterogeneous-treatment-effect-results-and
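The mechanics of the problem and a leave-one-out fix can be sketched in a few lines: stratifying on predicted control outcomes is fine, but the prediction for control observation i must not use i's own outcome, or you are partly sorting on the error term. A minimal Python illustration with simulated data and a single predictor (real applications use more covariates, and the literature, e.g. Abadie, Chingos, and West, studies leave-one-out and repeated split-sample versions):

```python
import random

random.seed(0)

# Simulated control group: baseline covariate x predicts the untreated outcome y0.
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
y0 = [0.5 * xi + random.gauss(0, 1) for xi in x]

def ols_predict(xs, ys, x_new):
    """Fit y = a + b * x by OLS and return the prediction at x_new."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    b = (sum((u - mx) * (v - my) for u, v in zip(xs, ys))
         / sum((u - mx) ** 2 for u in xs))
    return (my - b * mx) + b * x_new

# The pitfall: observation i's own outcome enters the fit used to predict i,
# so grouping on these predictions partly groups on the error term.
in_sample = [ols_predict(x, y0, x[i]) for i in range(n)]

# The fix: leave-one-out, i.e. refit without observation i before predicting for i.
leave_one_out = [ols_predict(x[:i] + x[i + 1:], y0[:i] + y0[i + 1:], x[i])
                 for i in range(n)]
```

The key property is mechanical: perturbing y0[i] moves the in-sample prediction for i but leaves the leave-one-out prediction for i untouched.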
A common question of interest in evaluations is “which groups does the treatment work best for?” A standard way to address this is to look at heterogeneity in treatment effects with respect to baseline characteristics. However, there are often many possible baseline characteristics to look at, and the heterogeneity of real interest may be with respect to outcomes in the absence of treatment. Consider two examples:

A: A vocational training program for the unemployed: we might want to know whether the treatment helps more those who would likely have stayed unemployed without the intervention than those who would likely have found a job anyway.

B: Smaller class sizes: we might want to know whether the treatment helps more those students whose test scores would have been low without smaller classes than those students who were likely to get high test scores anyway.

Mon, 16 Mar 2015 | David McKenzie

Why is Difference-in-Difference Estimation Still so Popular in Experimental Analysis?
http://blogs.worldbank.org/impactevaluations/why-difference-difference-estimation-still-so-popular-experimental-analysis
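The teachable moment here presumably relates to McKenzie's well-known efficiency point: in experiments with baseline data, ANCOVA (regressing the endline outcome on treatment and the baseline outcome) beats difference-in-differences whenever the outcome's autocorrelation is below one. With one baseline and one follow-up round, Var(DiD) is proportional to 2(1 - rho) and Var(ANCOVA) to 1 - rho^2, so the comparison is a one-liner (rho values are illustrative):

```python
def did_vs_ancova_variance_ratio(rho):
    """Variance of the difference-in-differences estimator relative to ANCOVA,
    with one baseline and one follow-up round and autocorrelation rho."""
    var_did = 2.0 * (1.0 - rho)    # DiD differences out the baseline level
    var_ancova = 1.0 - rho ** 2    # ANCOVA conditions on it instead
    return var_did / var_ancova    # simplifies to 2 / (1 + rho)

ratios = {rho: did_vs_ancova_variance_ratio(rho) for rho in (0.0, 0.3, 0.8)}
# Low autocorrelation (common for noisy outcomes like business profits) makes
# DiD markedly less efficient; only as rho -> 1 do the two coincide.
```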
David McKenzie pops out from under many of the empirical questions that come up in my research projects, which has not yet ceased to surprise me every time it happens, despite his prolific production (https://sites.google.com/site/decrgdmckenzie/publications-by-topic). The last time it happened was a teachable moment for me, so I thought I’d share it in a short post that fits nicely under our “Tools of the Trade” tag.

Mon, 23 Feb 2015 | Berk Ozler

Tools of the Trade: a joint test of orthogonality when testing for balance
http://blogs.worldbank.org/impactevaluations/tools-trade-joint-test-orthogonality-when-testing-balance
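The test itself is simple: regress the treatment indicator on all baseline covariates and test that the covariates are jointly insignificant (an F-test after the regression). A self-contained Python sketch on simulated data, using a permutation version of the joint test so that no distributional tables are needed (the permutation twist is this sketch's device, not necessarily what the post does):

```python
import random

random.seed(1)

def r_squared(y, X):
    """R^2 from an OLS regression of y on the columns of X (intercept added),
    solved via the normal equations with Gauss-Jordan elimination."""
    n, k = len(X), len(X[0]) + 1
    rows = [[1.0] + list(row) for row in X]
    xtx = [[sum(rows[i][a] * rows[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    xty = [sum(rows[i][a] * y[i] for i in range(n)) for a in range(k)]
    M = [xtx[a] + [xty[a]] for a in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(M[r][c]))  # partial pivoting
        M[c], M[p] = M[p], M[c]
        for r in range(k):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [M[r][j] - f * M[c][j] for j in range(k + 1)]
    beta = [M[a][k] / M[a][a] for a in range(k)]
    fitted = [sum(b * v for b, v in zip(beta, rows[i])) for i in range(n)]
    ybar = sum(y) / n
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Simulated experiment: randomized treatment, two baseline covariates.
n = 300
treat = [i % 2 for i in range(n)]
random.shuffle(treat)
X = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

obs_r2 = r_squared(treat, X)  # should be tiny under random assignment

# Permutation version of the joint test: re-randomize treatment many times and
# ask how often the reshuffled R^2 is at least as large as the observed one.
perms = 200
count = sum(r_squared(random.sample(treat, n), X) >= obs_r2 for _ in range(perms))
p_value = count / perms
```

In Stata the equivalent is a joint F-test of all covariates after regressing the treatment dummy on them; a large p-value is consistent with balance.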
This is a very simple (and for once short) post, but since I have been asked this question quite a few times by people who are new to doing experiments, I figured it would be worth posting. It is also useful for non-experimental comparisons of a treatment and a control group.

Wed, 04 Feb 2015 | David McKenzie

Curves in all the wrong places: Gelman and Imbens on why not to use higher-order polynomials in RD
http://blogs.worldbank.org/impactevaluations/curves-all-wrong-places-gelman-and-imbens-why-not-use-higher-order-polynomials-rd
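Gelman and Imbens's recommendation is to use local linear (or at most local quadratic) specifications rather than high-order global polynomials. A minimal local-linear RD sketch in Python (simulated data; a real application would choose the bandwidth in a principled way, e.g. Imbens-Kalyanaraman, rather than fixing it by hand):

```python
import random

random.seed(2)

def fit_line(xs, ys):
    """OLS intercept and slope for y = a + b * x."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    b = (sum((u - mx) * (v - my) for u, v in zip(xs, ys))
         / sum((u - mx) ** 2 for u in xs))
    return my - b * mx, b

# Simulated sharp RD: cutoff at 0 and a true jump of 1.5 in the outcome.
n = 2000
score = [random.uniform(-1, 1) for _ in range(n)]
outcome = [0.8 * s + 1.5 * (s >= 0) + random.gauss(0, 0.5) for s in score]

# Local linear estimation: keep observations within bandwidth h of the cutoff,
# fit separate lines on each side, and take the gap between the two intercepts.
h = 0.25
left = [(s, y) for s, y in zip(score, outcome) if -h <= s < 0]
right = [(s, y) for s, y in zip(score, outcome) if 0 <= s <= h]
a_left, _ = fit_line([s for s, _ in left], [y for _, y in left])
a_right, _ = fit_line([s for s, _ in right], [y for _, y in right])
rd_estimate = a_right - a_left  # estimate of the jump at the cutoff
```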
A good regression discontinuity can be a beautiful thing, as Dave Evans illustrates in a previous post (http://blogs.worldbank.org/impactevaluations/regression-discontinuity-porn). The typical RD consists of controlling for a smooth function of the forcing variable (i.e. the score with a cut-off such that people on one side get the treatment and those on the other side do not), and then looking for a discontinuity in the outcome of interest at this cut-off. A key practical problem is then exactly how to control for the forcing variable.

Mon, 08 Sep 2014 | David McKenzie

Tools of the Trade: Graphing Impacts with Standard Error Bars
http://blogs.worldbank.org/impactevaluations/tools-trade-graphing-impacts-standard-error-bars
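The post builds such a graph in Stata; the ingredients are the same in any package: each bar is a group mean, and the error bars span roughly plus or minus 1.96 standard errors of that mean. A sketch of the computation in Python with simulated data (the matplotlib call in the final comment is one way to draw it, and is an illustration rather than the post's code):

```python
import math
import random

random.seed(3)

def mean_and_ci_halfwidth(values, z=1.96):
    """Group mean and half-width of an approximate 95% confidence interval."""
    n = len(values)
    m = sum(values) / n
    var = sum((v - m) ** 2 for v in values) / (n - 1)
    return m, z * math.sqrt(var / n)

# Simulated outcomes for a control group and two treatment arms.
groups = {
    "control":     [random.gauss(1.0, 1.0) for _ in range(150)],
    "treatment 1": [random.gauss(1.3, 1.0) for _ in range(150)],
    "treatment 2": [random.gauss(1.6, 1.0) for _ in range(150)],
}

bars = {g: mean_and_ci_halfwidth(v) for g, v in groups.items()}
# One way to plot, e.g. with matplotlib:
#   plt.bar(range(len(bars)), [m for m, _ in bars.values()],
#           yerr=[hw for _, hw in bars.values()], capsize=4)
```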
This week I finally got around to learning how to make a graph which displays the means of different treatment groups for a range of outcomes, along with standard error bars to show whether there is a significant difference between groups. Here is an example:

[Figure: example Stata graph of treatment group means with standard error bars — http://blogs.worldbank.org/impactevaluations/files/impactevaluations/GraphingImpactsStata.jpg]

Sat, 08 Feb 2014 | David McKenzie

Tools of the trade: recent tests of matching estimators through the evaluation of job-training programs
http://blogs.worldbank.org/impactevaluations/tools-trade-recent-tests-matching-estimators-through-evaluation-job-training-programs
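Under unconfoundedness, matching imputes each treated unit's missing untreated outcome from observably similar controls. A bare-bones nearest-neighbor sketch in Python with a single covariate and simulated data (real applications typically match on a propensity score estimated from many covariates):

```python
import math
import random

random.seed(4)

# Simulated selection on observables: take-up depends on x, outcomes depend on
# x plus a true treatment effect of 2.0, and nothing unobserved drives both.
n = 500
data = []
for _ in range(n):
    x = random.gauss(0, 1)
    t = 1 if random.random() < 1.0 / (1.0 + math.exp(-x)) else 0
    y = x + 2.0 * t + random.gauss(0, 0.5)
    data.append((x, t, y))

treated = [(x, y) for x, t, y in data if t == 1]
controls = [(x, y) for x, t, y in data if t == 0]

def nearest_control_outcome(x_treated):
    """Outcome of the control observation whose x is closest to x_treated."""
    return min(controls, key=lambda c: abs(c[0] - x_treated))[1]

# ATT estimate: average gap between each treated outcome and its match.
att = sum(y - nearest_control_outcome(x) for x, y in treated) / len(treated)
```

A naive comparison of raw group means would be biased upward here because the treated have higher x; the matching estimate recovers something close to the true effect of 2.0, but only because unconfoundedness holds by construction in the simulation.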
Of all the impact evaluation methods, the one that consistently (and justifiably) comes last in the methods courses we teach is matching. We de-emphasize this method because it requires the strongest assumptions to yield a valid estimate of causal impact. Most important is the assumption of unconfoundedness, namely that selection into treatment can be accurately captured solely as a function of observable covariates in the data.

Wed, 05 Jun 2013 | Jed Friedman

Tools of the trade: when to use those sample weights
http://blogs.worldbank.org/impactevaluations/tools-of-the-trade-when-to-use-those-sample-weights
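A classic case where weights are essential is estimating a population average from a disproportionately stratified sample. This toy example (made-up strata and numbers, not from the paper) shows the unweighted estimate going wrong and inverse-probability weights fixing it:

```python
# Two strata: 90% of the population is rural, 10% urban, but the sample
# deliberately over-sampled urban households (200 observations from each).
population_share = {"rural": 0.9, "urban": 0.1}
sample_size = {"rural": 200, "urban": 200}
stratum_mean = {"rural": 10.0, "urban": 30.0}  # true mean outcome by stratum

# True population mean: 0.9 * 10 + 0.1 * 30 = 12.
true_mean = sum(population_share[s] * stratum_mean[s] for s in stratum_mean)

# The unweighted sample mean treats the strata as equally common: (10 + 30) / 2 = 20.
n_total = sum(sample_size.values())
unweighted = sum(sample_size[s] * stratum_mean[s] for s in stratum_mean) / n_total

# Weighting each observation by population share / sampling share fixes this.
sampling_share = {s: sample_size[s] / n_total for s in sample_size}
w = {s: population_share[s] / sampling_share[s] for s in sample_size}
weighted = (sum(w[s] * sample_size[s] * stratum_mean[s] for s in stratum_mean)
            / sum(w[s] * sample_size[s] for s in sample_size))
```

Solon, Haider, and Wooldridge's broader point is that the right answer depends on the estimand: weighting is the natural choice for population descriptive statistics like this one, while for causal parameters the case is more subtle.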
In numerous discussions with colleagues I am struck by the varied views and confusion around whether to use sample weights in regression analysis (a confusion that I share at times). A recent working paper by Gary Solon, Steven Haider, and Jeffrey Wooldridge (http://www.nber.org/papers/w18859) aims at the heart of this topic. It is short and comprehensive, and I recommend it to all practitioners confronted by this question.

Wed, 13 Mar 2013 | Jed Friedman

“Oops! Did I just ruin this impact evaluation?” Top 5 of mistakes and how the new Impact Evaluation Toolkit can help.
http://blogs.worldbank.org/impactevaluations/oops-did-i-just-ruin-this-impact-evaluation-top-5-of-mistakes-and-how-the-new-impact-evaluation-tool
On October 3rd, I sent out a survey asking people what was the biggest, most embarrassing, dramatic, funny, or otherwise memorable “oops” mistake they had made in an impact evaluation. Within a few hours, a former manager came into my office to warn me: “Christel, I tried this 10 years ago, and I got exactly two responses.”

Wed, 12 Dec 2012 | Christel Vermeersch

Tools of the Trade: Intra-cluster correlations
http://blogs.worldbank.org/impactevaluations/tools-of-the-trade-intra-cluster-correlations
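The reason the intra-cluster correlation matters so much for design is the design effect: with clusters of size m and intra-cluster correlation rho, clustered assignment inflates the variance of a treatment-control comparison by roughly 1 + (m - 1) * rho. A small sketch (the numbers are illustrative):

```python
def design_effect(m, rho):
    """Variance inflation from randomizing clusters of size m when outcomes
    within a cluster have intra-cluster correlation rho."""
    return 1.0 + (m - 1) * rho

def effective_sample_size(n_total, m, rho):
    """Sample size that individual-level randomization would need
    to deliver the same precision."""
    return n_total / design_effect(m, rho)

# Even a modest ICC is costly with large clusters: 40 schools of 50 students
# (2000 observations) with rho = 0.10 behave like roughly 339 independent ones.
ess = effective_sample_size(2000, 50, 0.10)
```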
In clustered randomized experiments, random assignment occurs at the group level, with multiple units observed within each group. For example, education interventions might be assigned at the school level, with outcomes measured at the student level, or microfinance interventions might be assigned at the savings-group level, with outcomes measured for individual clients.

Sun, 02 Dec 2012 | David McKenzie

Tools of the Trade: A quick adjustment for multiple hypothesis testing
http://blogs.worldbank.org/impactevaluations/tools-of-the-trade-a-quick-adjustment-for-multiple-hypothesis-testing
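Two standard family-wise error-rate adjustments, Bonferroni and its step-down refinement due to Holm, take only a few lines; they are shown here as generic illustrations of the mechanics rather than as the post's specific recommendation:

```python
def bonferroni(pvals):
    """Multiply each p-value by the number of tests (capped at 1)."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    """Holm step-down: the k-th smallest p-value is scaled by (m - k),
    with monotonicity enforced; uniformly more powerful than Bonferroni."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for k, i in enumerate(order):
        running_max = max(running_max, (m - k) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

raw = [0.001, 0.02, 0.04, 0.30]
adj_bonf = bonferroni(raw)
adj_holm = holm(raw)
# Holm never penalizes more than Bonferroni: adj_holm[i] <= adj_bonf[i] for all i.
```

With 334 outcomes, as in the Casey et al. example, even these simple corrections make the cost of unplanned outcome-mining vivid: a raw p-value of 0.01 survives neither.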
As our impact evaluations broaden to consider more and more possible outcomes of economic interventions (an extreme example being the 334 unique outcome variables considered by Casey et al. (http://elsa.berkeley.edu/~emiguel/pdfs/miguel_gbf.pdf) in their CDD evaluation) and increasingly investigate the channels of impact through subgroup heterogeneity analysis, the issue of multiple hypothesis testing is gaining prominence.

Mon, 22 Oct 2012 | David McKenzie

Tools of the trade: The covariate balanced propensity score
http://blogs.worldbank.org/impactevaluations/tools-of-the-trade-the-covariate-balanced-propensity-score
The primary goal of an impact evaluation study is to estimate the causal effect of a program, policy, or intervention. Randomized assignment of treatment enables the researcher to draw causal inferences in a relatively assumption-free manner. If randomization is not feasible, there are more assumption-driven methods, termed quasi-experimental, such as regression discontinuity or propensity score matching. For many of our readers this summary is nothing new. But fortunately, in our “community of practice” new statistical tools are developed at a rapid rate.

Wed, 03 Oct 2012 | Jed Friedman

Help for attrition is just a phone call away – a new bounding approach to help deal with non-response
http://blogs.worldbank.org/impactevaluations/help-for-attrition-is-just-a-phone-call-away-a-new-bounding-approach-to-help-deal-with-non-response
Attrition is a bugbear for most impact evaluations, and can cause even the best-designed experiments to be subject to potential bias. In a new paper (http://www.iza.org/en/webcontent/publications/papers/viewAbstract?dp_id=6751), Luc Behaghel, Bruno Crépon, Marc Gurgand and Thomas Le Barbanchon describe a clever new way to deal with this problem using information on the number of attempts it takes to get someone to respond to a survey.

Mon, 24 Sep 2012 | David McKenzie

Whether to probit or to probe it: in defense of the Linear Probability Model
http://blogs.worldbank.org/impactevaluations/whether-to-probit-or-to-probe-it-in-defense-of-the-linear-probability-model
Last week David linked to a virtual discussion involving Dave Giles and Steffen Pischke on the merits or demerits of the Linear Probability Model (LPM).

Wed, 18 Jul 2012 | Jed Friedman
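Part of the usual case for the LPM in experimental work is transparency: with a single binary treatment, OLS on a binary outcome recovers exactly the difference in success rates, and heteroskedasticity-robust standard errors handle the heteroskedasticity the LPM builds in. A minimal sketch with simulated data (an illustration of the estimator, not code from the discussion):

```python
import math
import random

random.seed(5)

# Randomized binary treatment, binary outcome. The LPM is OLS of y on t, and
# with a single dummy regressor its slope is exactly the difference in rates.
n = 1000
t = [i % 2 for i in range(n)]
random.shuffle(t)
y = [1 if random.random() < (0.5 if ti == 1 else 0.3) else 0 for ti in t]

n1 = sum(t)
n0 = n - n1
p1 = sum(yi for yi, ti in zip(y, t) if ti == 1) / n1
p0 = sum(yi for yi, ti in zip(y, t) if ti == 0) / n0
lpm_effect = p1 - p0  # the OLS coefficient on the treatment dummy

# The LPM is heteroskedastic by construction, so robust standard errors are a
# must; with one binary regressor the robust SE is the two-proportion formula.
robust_se = math.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
```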