Tools of the Trade
http://blogs.worldbank.org/impactevaluations/taxonomy/term/3844/all
List Experiments for Sensitive Questions – a Methods Bleg
http://blogs.worldbank.org/impactevaluations/list-experiments-sensitive-questions-methods-bleg
About a year ago, I wrote [a blog post](http://blogs.worldbank.org/impactevaluations/issues-data-collection-and-measurement) on issues surrounding data collection and measurement. In it, I talked about “list experiments” for sensitive questions, on which I was not sold at the time. However, now that I have a number of studies at different stages of data collection, many of them on sensitive topics in adolescent female target populations, I am paying closer attention to them. In reading and thinking about the topic and how to implement it in our surveys, I came up with a number of questions about the optimal implementation of these methods. There is also probably more to be learned about these methods to improve them further, opening up the possibility of experimenting with them when we can. Below are the things I am thinking about, and, as we still have some time before our data collection tools are finalized, you, our readers, have a chance to help shape them with your comments and feedback.

Mon, 08 May 2017 | Berk Ozler
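The post above does not show the estimator itself. For readers new to the technique: in the standard list experiment, a control group is shown J innocuous items and a treatment group the same J items plus the sensitive one, everyone reports only a *count*, and the prevalence of the sensitive trait is estimated by a difference in mean counts. A minimal simulated sketch (all numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulate a list experiment: respondents report a COUNT of items that
# apply to them, never which items. Controls see 4 innocuous items;
# the treatment group sees the same 4 plus the sensitive item.
true_prevalence = 0.30
innocuous = rng.binomial(4, 0.5, size=n)            # count of innocuous items
sensitive = rng.binomial(1, true_prevalence, n)     # holds the sensitive trait?
treat = rng.integers(0, 2, size=n).astype(bool)     # random half get the long list

count = np.where(treat, innocuous + sensitive, innocuous)

# Difference-in-means estimator of the sensitive item's prevalence
est = count[treat].mean() - count[~treat].mean()
se = np.sqrt(count[treat].var(ddof=1) / treat.sum()
             + count[~treat].var(ddof=1) / (~treat).sum())
print(f"estimated prevalence = {est:.3f} (se {se:.3f})")
```

The standard error also shows the design's cost: the innocuous items add variance, which is one reason the post worries about optimal implementation.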
Power Calculations for Regression Discontinuity Evaluations: Part 3
http://blogs.worldbank.org/impactevaluations/power-calculations-regression-discontinuity-evaluations-part-3
This is the third, and final, post in a series on doing power calculations for regression discontinuity designs (see [part 1](http://blogs.worldbank.org/impactevaluations/power-calculations-regression-discontinuity-evaluations-part-1) and [part 2](http://blogs.worldbank.org/impactevaluations/power-calculations-regression-discontinuity-evaluations-part-2)).

**Scenario 3 (SCORE DATA AVAILABLE, AT LEAST PRELIMINARY OUTCOME DATA AVAILABLE; OR SIMULATED DATA USED):** *The context of data already being available seems less usual to me in the planning stages of an impact evaluation, but it is possible in some settings (e.g. you have the score data and administrative data on a few outcomes, and are deciding whether to collect survey data on other outcomes). More generally, you will be in this situation once you have collected all your data. The methods discussed here can also be used with simulated data in cases where you have none.*

There is a new Stata package, [rdpower](https://sites.google.com/site/rdpackages/rdpower), written by Matias Cattaneo and co-authors, that can be really helpful in this scenario (thanks also to him for answering several questions I had on its use). It calculates power and sample sizes, assuming you will then use the *rdrobust* command to analyze the data. There are two related commands:

- **rdpower:** calculates the power, given your data and sample size, for a range of different effect sizes.
- **rdsampsi:** calculates the sample size you need to achieve a given power, given your data and that you will analyze it with *rdrobust*.

Mon, 12 Sep 2016 | David McKenzie
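The excerpt stops at the command list, so here is a rough sketch of what a simulation-based RD power calculation does: simulate data with a jump of a given size at the cutoff, estimate it with a local linear regression, and count how often the jump is statistically significant. This is a Python illustration under my own assumptions (fixed bandwidth, homoskedastic noise), not the rdpower package's actual syntax or algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def rd_t_stat(n, tau, bw=0.5, noise_sd=1.0):
    """One simulated RD draw: local linear regression within +/- bw of
    the cutoff (at 0), returning the t-statistic on the estimated jump."""
    score = rng.uniform(-1, 1, n)
    d = (score >= 0).astype(float)
    y = 0.5 * score + tau * d + rng.normal(0, noise_sd, n)
    keep = np.abs(score) <= bw
    X = np.column_stack([np.ones(keep.sum()), d[keep], score[keep],
                         d[keep] * score[keep]])
    yk = y[keep]
    beta, *_ = np.linalg.lstsq(X, yk, rcond=None)
    resid = yk - X @ beta
    sigma2 = resid @ resid / (len(yk) - X.shape[1])
    var_beta = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(var_beta[1, 1])

def power(n, tau, sims=200):
    """Share of simulated draws where the jump is significant at 5%."""
    return np.mean([abs(rd_t_stat(n, tau)) > 1.96 for _ in range(sims)])

p_effect = power(2000, 0.3)
p_null = power(2000, 0.0)
print("power with tau = 0.3 SD, n = 2000:", p_effect)
print("rejection rate under tau = 0:    ", p_null)
```

Even with 2,000 units, power for a 0.3 SD jump is well short of what the same effect would get in an RCT of that size, which is the series' recurring theme.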
Power Calculations for Regression Discontinuity Evaluations: Part 2
http://blogs.worldbank.org/impactevaluations/power-calculations-regression-discontinuity-evaluations-part-2
[Part 1](http://blogs.worldbank.org/impactevaluations/power-calculations-regression-discontinuity-evaluations-part-1) covered the case where you have no data. Today’s post considers another common setting where you might need to do RD power calculations.

**Scenario 2 (SCORE DATA AVAILABLE, NO OUTCOME DATA AVAILABLE):** *The context here is that assignment to treatment has already occurred via a scoring threshold rule, and you are deciding whether to try to collect follow-up data. For example, referees may have scored grant applications, with proposals above a certain score getting funded, and you are now deciding whether to collect outcomes several years later to see whether the grants had impacts; or kids may have sat a test to get into a gifted and talented program, and you now want to decide whether to collect data on how these kids have done in the labor market.*

Here you have the score data, so you don’t need to make assumptions about the correlation between treatment assignment and the score, but can use the actual correlation in your data. However, since the optimal bandwidth will differ for each outcome examined, and you don’t have the outcome data, you don’t know what the optimal bandwidth will be.

In this context you can use the design effect discussed in [my first blog post](http://blogs.worldbank.org/impactevaluations/power-calculations-regression-discontinuity-evaluations-part-1) with the actual correlation. You can then check whether you would have sufficient power if you surveyed the full sample, and adjust for choosing an optimal bandwidth within this sample using an additional multiple of the design effect, as discussed previously. Or you can simulate outcomes and use the simulated outcomes along with the actual score data (see the next post).

Thu, 08 Sep 2016 | David McKenzie
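The design-effect formula from part 1 is not restated in this excerpt; assuming the usual variance-inflation form 1/(1 - rho^2), where rho is the correlation between the treatment dummy and the score, the calculation needs only the score data, as described above. A sketch with simulated stand-in scores:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the real score data (e.g. referee scores on applications);
# here uniform, with funding going to applications above the median.
score = rng.uniform(0, 100, 5000)
cutoff = np.median(score)
treated = (score >= cutoff).astype(float)

# Actual correlation between treatment assignment and the score,
# computed from the data rather than assumed.
rho = np.corrcoef(treated, score)[0, 1]
design_effect = 1.0 / (1.0 - rho**2)

n_rct = 800                     # whatever a standard RCT calculation gives
n_rd = int(np.ceil(design_effect * n_rct))
print(f"rho = {rho:.3f}, design effect = {design_effect:.2f}, "
      f"n needed for RD = {n_rd}")
```

With a uniform score and a median cutoff, rho is about 0.87, so the design effect is around 4: the RD needs roughly four times the RCT sample before any further bandwidth adjustment.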
Power Calculations for Regression Discontinuity Evaluations: Part 1
http://blogs.worldbank.org/impactevaluations/power-calculations-regression-discontinuity-evaluations-part-1
I haven’t done many RD evaluations before, but have recently been involved in two studies that use regression discontinuity designs. One issue that comes up is how to do power calculations for these studies. I thought I’d share some of what I have learned, and if anyone has more experience or additional helpful content, please let me know in the comments. I thank, without implication, Matias Cattaneo for sharing a lot of helpful advice.

**One headline piece of information I’ve learned is that RD designs have far less power than RCTs for a given sample, and I was surprised by how much larger a sample you need for an RD.**

How to do power calculations varies with the set-up and data availability. I’ll do three posts to cover different scenarios:

**Scenario 1 (NO DATA AVAILABLE):** *The context here is a prospective RD study. For example, a project is considering scoring business plans, and those above a cutoff will get a grant; or a project will be targeting for poverty, and those below some poverty index measure will get the program; or a school test is being used, with those who pass the test proceeding to some next stage.*

*The key features here are that, since the study is being planned in advance, you have data on neither the score (running variable) nor the outcome of interest. The objective of the power calculation is to see what size sample you would need in the project and survey, and whether it is worth going ahead with the study. Typically your goal is to get a sense of the order of magnitude: do I need 500 units or 5,000?*

Tue, 06 Sep 2016 | David McKenzie
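For a concrete order-of-magnitude sketch in this no-data scenario, one can start from the standard two-arm RCT sample-size formula and inflate it by an RD design effect. The parameter values below (5% two-sided test, 80% power, a 0.2 SD effect, and a guessed correlation of 0.87 between treatment and score) are my own assumptions for illustration:

```python
import math

# Standard total sample size for a 50/50 two-arm RCT:
#   N = 4 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
z_alpha, z_power = 1.96, 0.84      # 5% two-sided test, 80% power
sigma, delta = 1.0, 0.2            # outcome SD, effect size (in SD units)
n_rct = 4 * (z_alpha + z_power) ** 2 * sigma**2 / delta**2

# RD inflation: assume a correlation rho between the treatment dummy and
# the score (0.87 is roughly what a uniform score with a median cutoff
# implies), and apply the design effect 1 / (1 - rho^2).
rho = 0.87
design_effect = 1 / (1 - rho**2)
n_rd = math.ceil(design_effect * n_rct)

print(f"RCT needs ~{n_rct:.0f} units; RD with rho = {rho} needs ~{n_rd}")
```

The jump from roughly 800 to several thousand units is exactly the "500 or 5,000?" order-of-magnitude question the post poses.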
Tools of the Trade: The Regression Kink Design
http://blogs.worldbank.org/impactevaluations/tools-trade-regression-kink-design
Regression discontinuity designs have become a popular addition to the impact evaluation toolkit, and offer a [visually appealing](http://blogs.worldbank.org/impactevaluations/regression-discontinuity-porn) way of demonstrating the impact of a program around a cutoff. An extension of this approach that is growing in usage is the **regression kink design (RKD)**. I’ve never estimated one of these, and am not an expert, but thought it might be useful to provide an introduction to this approach, along with some links that people can follow up on if they want to implement it.

Mon, 08 Feb 2016 | David McKenzie
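The teaser links to the literature rather than giving an estimator, but the core idea can be sketched: where an RD looks for a jump in the outcome's *level* at a cutoff, an RKD looks for a change in its *slope* at a kink in the assignment rule. A minimal reduced-form simulation (the data-generating process is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000

# Running variable and an outcome whose slope changes at v = 0:
# below the kink the slope is 0.2, above it 0.7, so the true
# slope change ("kink") is 0.5. There is no jump in levels.
v = rng.uniform(-1, 1, n)
above = (v >= 0).astype(float)
y = 1.0 + 0.2 * v + 0.5 * v * above + rng.normal(0, 0.3, n)

# Reduced-form RKD: regress y on v and the slope-change term v * 1(v>=0),
# locally around the kink; the interaction coefficient estimates the kink.
keep = np.abs(v) <= 0.5
X = np.column_stack([np.ones(keep.sum()), v[keep], (v * above)[keep]])
beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
print(f"estimated slope change at the kink: {beta[2]:.3f}")
```

In an actual RKD application this reduced-form kink in the outcome would be scaled by the known kink in the policy rule to recover the treatment effect.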
From my mailbox: should I work with only a subsample of my control group if I have big take-up problems?
http://blogs.worldbank.org/impactevaluations/my-mailbox-should-i-work-only-subsample-my-control-group-if-i-have-big-take-problems
Over the past month I’ve received several versions of the same question, so I thought it might be useful to post about it.

Here’s one version:

> *I have a question about an experiment in which we had a very big problem getting the individuals in the treatment group to take up the treatment. As a result, we now have a treated group much smaller than the control group. For efficiency reasons, does it still make sense to survey the whole control group, or should we take a random draw in order to have an equal number of treated and control?*

And another version

Mon, 20 Jul 2015 | David McKenzie

Allocating Treatment and Control with Multiple Applications per Applicant and Ranked Choices
http://blogs.worldbank.org/impactevaluations/allocating-treatment-and-control-multiple-applications-applicant-and-ranked-choices
This came up in the context of work with Ganesh Seshan designing an evaluation of a computer training program for migrants. The training was taught in one 3-hour class per week for several months. Classes ran Sunday, Tuesday and Thursday evenings from 5-8 pm, and there were four separate slots on Friday, the first day of the weekend. So in total there were 7 possible sessions people could attend. However, most migrants would prefer to go on the weekend, and many would not be able to attend on particular days of the week.

Tue, 07 Jul 2015 | David McKenzie

Endogenous stratification: the surprisingly easy way to bias your heterogeneous treatment effect results and what you should do instead
http://blogs.worldbank.org/impactevaluations/endogenous-stratification-surprisingly-easy-way-bias-your-heterogeneous-treatment-effect-results-and
A common question of interest in evaluations is “which groups does the treatment work best for?” A standard way to address this is to look at heterogeneity in treatment effects with respect to baseline characteristics. However, there are often many possible baseline characteristics to look at, and the heterogeneity of real interest may be with respect to outcomes in the absence of treatment. Consider two examples:

- A: A vocational training program for the unemployed: we might want to know whether the treatment helps more those who were likely to stay unemployed in the absence of the intervention than those who would have been likely to find a job anyway.
- B: Smaller class sizes: we might want to know whether the treatment helps more those students whose test scores would have been low in the absence of smaller classes than those students who were likely to get high test scores anyway.

Mon, 16 Mar 2015 | David McKenzie
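The "what you should do instead" of the title is not in this excerpt. One standard remedy (due to Abadie, Chingos and West) is to predict outcomes-in-the-absence-of-treatment with leave-one-out regressions in the control group, so that each unit's own outcome never enters its own predicted value, and then stratify on those predictions. A sketch with hypothetical variables:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 400, 3

# Control-group data: baseline covariates X and observed outcome y.
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(0, 1, n)

# Leave-one-out predictions: unit i's predicted "outcome in the absence
# of treatment" comes from a regression that excludes unit i, so the
# prediction is not mechanically correlated with i's own error term.
pred = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    pred[i] = X[i] @ beta

# Stratify on the leave-one-out predictions (here terciles); treatment
# effects would then be estimated within these strata.
terciles = np.quantile(pred, [1 / 3, 2 / 3])
stratum = np.digitize(pred, terciles)
counts = np.bincount(stratum)
print("units per stratum:", counts)
```

Using a single full-sample regression to form the strata instead is the "surprisingly easy way to bias your results" that the post's title warns about.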
Why is Difference-in-Difference Estimation Still so Popular in Experimental Analysis?
http://blogs.worldbank.org/impactevaluations/why-difference-difference-estimation-still-so-popular-experimental-analysis
David McKenzie pops out from under many of the empirical questions that come up in my research projects, which has not yet ceased to surprise me each time it happens, despite [his prolific production](https://sites.google.com/site/decrgdmckenzie/publications-by-topic). The last time it happened was a teachable moment for me, so I thought I’d share it in a short post that fits nicely under our “Tools of the Trade” tag.

Mon, 23 Feb 2015 | Berk Ozler
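The teaser ends before the punchline. A key part of the answer, as I understand McKenzie's earlier work on this question, is that when outcomes are weakly autocorrelated, ANCOVA (regressing the follow-up outcome on treatment and the baseline outcome) is more efficient than difference-in-differences. A quick simulation under assumed parameters:

```python
import numpy as np

rng = np.random.default_rng(5)

def one_draw(n=200, rho=0.2, tau=0.0):
    """Baseline y0 and follow-up y1 with autocorrelation rho; random
    treatment with effect tau. Returns (DiD estimate, ANCOVA estimate)."""
    y0 = rng.normal(0, 1, n)
    y1 = rho * y0 + rng.normal(0, np.sqrt(1 - rho**2), n)
    t = rng.integers(0, 2, n).astype(float)
    y1 = y1 + tau * t
    # Difference-in-differences: compare mean changes across groups
    did = (y1 - y0)[t == 1].mean() - (y1 - y0)[t == 0].mean()
    # ANCOVA: regress follow-up on treatment and the baseline outcome
    X = np.column_stack([np.ones(n), t, y0])
    beta, *_ = np.linalg.lstsq(X, y1, rcond=None)
    return did, beta[1]

draws = np.array([one_draw() for _ in range(500)])
var_did = draws[:, 0].var()
var_ancova = draws[:, 1].var()
print("variance of DiD estimate:   ", var_did)
print("variance of ANCOVA estimate:", var_ancova)
```

With autocorrelation of 0.2, the DiD estimator's variance is noticeably larger; the two converge only as the autocorrelation approaches one.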
Tools of the Trade: a joint test of orthogonality when testing for balance
http://blogs.worldbank.org/impactevaluations/tools-trade-joint-test-orthogonality-when-testing-balance
This is a very simple (and, for once, short) post, but since I have been asked this question quite a few times by people who are new to running experiments, I figured it would be worth posting. It is also useful for non-experimental comparisons of a treatment and a control group.

Wed, 04 Feb 2015 | David McKenzie
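The recipe itself is not in this excerpt; the standard version is to regress the treatment dummy on all baseline covariates jointly and F-test that every slope coefficient is zero (in Stata, a regression followed by `test`). A sketch of the same test in Python:

```python
import numpy as np

rng = np.random.default_rng(6)
n, k = 500, 5

# Baseline covariates and a randomly assigned treatment dummy.
X = rng.normal(size=(n, k))
treat = rng.integers(0, 2, n).astype(float)

def joint_f(treat, X):
    """F-statistic for H0: all covariate coefficients are zero in a
    regression of the treatment dummy on the covariates."""
    n, k = X.shape
    Xc = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xc, treat, rcond=None)
    rss1 = ((treat - Xc @ beta) ** 2).sum()
    rss0 = ((treat - treat.mean()) ** 2).sum()   # intercept-only model
    return ((rss0 - rss1) / k) / (rss1 / (n - k - 1))

print("F with random assignment:", joint_f(treat, X))

# The same test when one 'baseline covariate' is in fact contaminated
# by treatment status, so the sample is badly imbalanced on it.
X_bad = X.copy()
X_bad[:, 0] = treat + rng.normal(0, 0.5, n)
print("F with imbalance:        ", joint_f(treat, X_bad))
```

Under random assignment the F-statistic hovers around 1; a large value flags joint imbalance even when no single covariate looks dramatically off.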
Curves in all the wrong places: Gelman and Imbens on why not to use higher-order polynomials in RD
http://blogs.worldbank.org/impactevaluations/curves-all-wrong-places-gelman-and-imbens-why-not-use-higher-order-polynomials-rd
A good regression discontinuity can be a beautiful thing, as Dave Evans illustrates in a [previous post](http://blogs.worldbank.org/impactevaluations/regression-discontinuity-porn). The typical RD consists of controlling for a smooth function of the forcing variable (i.e. the score with a cut-off where people on one side get the treatment and those on the other side do not), and then looking for a discontinuity in the outcome of interest at this cut-off. A key practical problem is how exactly to control for the forcing variable.

Wed, 08 Sep 2014 | David McKenzie
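Gelman and Imbens' recommendation is to prefer local linear (or quadratic) estimation over global high-order polynomials. The concern can be illustrated by comparing the two on simulated data whose conditional mean is smooth, wiggly, and has no jump at all at the cutoff; the data-generating process and bandwidth below are my own choices:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000

# Smooth, nonlinear conditional mean with NO discontinuity at the cutoff.
score = rng.uniform(-1, 1, n)
d = (score >= 0).astype(float)
y = np.sin(5 * score) + rng.normal(0, 0.1, n)

def rd_estimate(score, y, d, poly_order, bw=1.0):
    """Estimated jump at the cutoff from a polynomial fit (separate
    polynomials on each side) within +/- bw of the cutoff."""
    keep = np.abs(score) <= bw
    terms = [score[keep] ** p for p in range(1, poly_order + 1)]
    terms += [d[keep] * score[keep] ** p for p in range(1, poly_order + 1)]
    X = np.column_stack([np.ones(keep.sum()), d[keep]] + terms)
    beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
    return beta[1]   # coefficient on the treatment dummy = estimated jump

tau_global = rd_estimate(score, y, d, poly_order=5)           # global quintic
tau_local = rd_estimate(score, y, d, poly_order=1, bw=0.1)    # local linear
print("global 5th-order polynomial estimate:", tau_global)
print("local linear (bw = 0.1) estimate:    ", tau_local)
```

The true jump is zero; the local linear estimate stays close to it, while the global polynomial's estimate depends heavily on curvature far from the cutoff.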
Tools of the Trade: Graphing Impacts with Standard Error Bars
http://blogs.worldbank.org/impactevaluations/tools-trade-graphing-impacts-standard-error-bars
This week I finally got around to learning how to make a graph that displays the means of different treatment groups for a range of outcomes, along with standard error bars to show whether there is a significant difference between groups. Here is an example:

![Example graph of treatment group means with standard error bars](http://blogs.worldbank.org/impactevaluations/files/impactevaluations/GraphingImpactsStata.jpg)

Sat, 08 Feb 2014 | David McKenzie
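The Stata code behind the example graph is not in this excerpt; an equivalent sketch in Python/matplotlib, with invented group names and data, shows the same idea of bar heights for group means plus error bars for confidence intervals:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")            # render off-screen, no display needed
import matplotlib.pyplot as plt

rng = np.random.default_rng(8)

# Made-up outcomes for a control group and two treatment arms.
groups = {"Control": rng.normal(0.0, 1, 300),
          "Treatment 1": rng.normal(0.3, 1, 300),
          "Treatment 2": rng.normal(0.5, 1, 300)}

names = list(groups)
means = np.array([groups[g].mean() for g in names])
sems = np.array([groups[g].std(ddof=1) / np.sqrt(len(groups[g]))
                 for g in names])

# Bars for group means, error bars for 95% confidence intervals.
fig, ax = plt.subplots()
ax.bar(names, means, yerr=1.96 * sems, capsize=6, color="steelblue")
ax.set_ylabel("Mean outcome")
ax.set_title("Group means with 95% confidence intervals")
fig.savefig("impacts.png", dpi=100)
print("group means:", np.round(means, 2))
```

For several outcomes at once, the same pattern extends to grouped bars with one cluster per outcome, as in the Stata example above.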
Tools of the trade: recent tests of matching estimators through the evaluation of job-training programs
http://blogs.worldbank.org/impactevaluations/tools-trade-recent-tests-matching-estimators-through-evaluation-job-training-programs
Of all the impact evaluation methods, the one that consistently (and justifiably) comes last in the methods courses we teach is *matching*. We de-emphasize this method because it requires the strongest assumptions to yield a valid estimate of causal impact. Most important is the assumption of *unconfoundedness*: that selection into treatment can be accurately captured solely as a function of observable covariates in the data.

Wed, 05 Jun 2013 | Jed Friedman
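To make the unconfoundedness idea concrete, here is a toy nearest-neighbor matching estimator on a single observed confounder (all numbers invented). When selection really is driven only by the observed covariate, matching on it removes the bias that a naive comparison of means suffers; when selection depends on unobservables, no amount of matching on observables helps:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 2000

# One observed confounder drives both selection and outcomes.
x = rng.normal(0, 1, n)
treat = rng.random(n) < 1 / (1 + np.exp(-2 * x))    # selection on x
y = 2.0 * x + 1.0 * treat + rng.normal(0, 1, n)     # true effect = 1.0

# Naive comparison: badly biased because treated units have higher x.
naive = y[treat].mean() - y[~treat].mean()

# Nearest-neighbor matching: pair each treated unit with the control
# whose x is closest, then average the outcome differences.
xc, yc = x[~treat], y[~treat]
matches = np.abs(x[treat][:, None] - xc[None, :]).argmin(axis=1)
att = (y[treat] - yc[matches]).mean()

print(f"naive difference in means: {naive:.2f}")
print(f"matching estimate of ATT:  {att:.2f}")
```

The tests the post reviews ask, in effect, how often real-world job-training data resemble the first situation rather than the second.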
Tools of the trade: when to use those sample weights
http://blogs.worldbank.org/impactevaluations/tools-of-the-trade-when-to-use-those-sample-weights
In numerous discussions with colleagues I am struck by the varied views and confusion around whether to use sample weights in regression analysis (a confusion that I share at times). A [recent working paper](http://www.nber.org/papers/w18859) by Gary Solon, Steven Haider, and Jeffrey Wooldridge aims at the heart of this topic. It is short and comprehensive, and I recommend it to all practitioners confronted by this question.

Wed, 13 Mar 2013 | Jed Friedman
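One practical diagnostic the paper discusses, as I understand it, is simply comparing weighted and unweighted estimates: under a correctly specified model with homogeneous effects, both are consistent, so a large gap between them is a warning sign about the specification. A minimal numpy sketch with simulated data and invented weights:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 2000

x = rng.normal(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)     # correctly specified model
w = rng.uniform(0.5, 3.0, n)                # survey-style sampling weights

X = np.column_stack([np.ones(n), x])

# Unweighted OLS
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Weighted least squares: scale each row by the square root of its weight
sw = np.sqrt(w)
b_wls, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)

print(f"unweighted slope: {b_ols[1]:.3f}, weighted slope: {b_wls[1]:.3f}")
```

Here the two slopes agree up to sampling noise; in real data, a substantively large divergence suggests misspecification or heterogeneous effects, which is where the paper's fuller guidance comes in.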
“Oops! Did I just ruin this impact evaluation?” Top 5 of mistakes and how the new Impact Evaluation Toolkit can help.
http://blogs.worldbank.org/impactevaluations/oops-did-i-just-ruin-this-impact-evaluation-top-5-of-mistakes-and-how-the-new-impact-evaluation-tool
On October 3rd, I sent out a survey asking people for the biggest, most embarrassing, dramatic, funny, or otherwise memorable “oops” mistake they had made in an impact evaluation. Within a few hours, a former manager came into my office to warn me: “Christel, I tried this 10 years ago, and I got exactly two responses.”

Wed, 12 Dec 2012 | Christel Vermeersch