David McKenzie's blog

From my inbox: Three enquiries on winsorizing, testing balance, and dealing with low take-up

I’ve been travelling the past week, and had several people contact me with questions about impact evaluation while away. I figured these might come up again, and so I’d put up the questions and answers here in case they are useful for others.
Question 1: Winsorizing – “do we do this on the whole sample, or do we do it within treatment and control, baseline and follow-up?”
Winsorizing is commonly used to deal with outliers: for example, you might set all data points above the 99th percentile equal to the 99th percentile. The key is not to use different cut-offs for treatment and control. For example, suppose you have a treatment for businesses that makes 4 percent of the treatment group grow their sales massively. If you winsorize the treatment group at the 95th percentile of the treatment distribution and the control group at the 95th percentile of the control distribution, you might end up completely missing the treatment effect. I do think it makes sense to use separate cutoffs by survey round, to allow for seasonal effects and so you aren’t winsorizing more points from one round than another (which could happen if you used the same global cutoffs for all rounds).
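A minimal sketch of this logic — pooling treatment and control but winsorizing within each survey round — might look as follows (the sales data, 99th-percentile cutoff, and two-round structure are all hypothetical, for illustration only):

```python
import numpy as np

def winsorize_upper(x, pct=99):
    """Cap values above the given percentile at that percentile."""
    cutoff = np.percentile(x, pct)
    return np.minimum(x, cutoff)

# Hypothetical sales data for 200 firms, observed over two survey rounds
rng = np.random.default_rng(0)
sales = rng.lognormal(mean=3, sigma=1, size=200)
rounds = np.repeat([1, 2], 100)

# One cutoff per round, computed on the pooled treatment+control sample
winsorized = sales.copy()
for r in np.unique(rounds):
    mask = rounds == r
    winsorized[mask] = winsorize_upper(sales[mask], pct=99)
```

The point of computing the cutoff on the pooled sample within each round is exactly the one in the answer above: separate treatment/control cutoffs can truncate away a treatment effect concentrated in the upper tail.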

Get more farmers off their farms

Justin Wolfers had a nice piece in the Upshot about new work on how growing up in a bad neighborhood has long-term negative consequences for kids. The key point of the new work is that the benefits of moving from bad neighborhoods may be particularly high for kids whose parents won’t voluntarily move, but only move because their public housing is demolished.

Weekly links March 25: nudges, helpful Stata commands, saving more and earning more, and more…

Weekly links March 18: R&D credits, Indian internal migration, Accra’s slums, and more…

The top 8 active researchers in developing countries according to RePEc

The Research Papers in Economics (RePEc) database has over 46,000 researchers registered. Each month they send out rankings based on downloads, citations, and other metrics. Their ranking of economists based on publications in the last 10 years is topped by some of the best known names in economics (the top 5 are Acemoglu, Shleifer, Heckman, Barro and Rogoff). But looking through their top 100 (as of January 2016), I found 8 of the top 100 researchers are based in developing countries (taking World Bank client countries as “developing countries” for this purpose). Since I was only familiar with the work of one of these eight individuals, I thought it might be of interest to note some of this work going on outside of the usual top schools. I contacted the authors to ask them also what idea or work they were most proud of, or would most like to draw policy attention to.

Weekly links March 11: Defining a large effect size, helping job-seekers, a field research guide, Papua New Guineans, and more…

  • What is a large effect size? In the Huffington Post, Robert Slavin reviews educational research and finds that average effect sizes differ depending on whether the sample size is small or large, and whether the evaluation is non-experimental (matching) or randomized. The average effect size for a randomized evaluation on a large sample is 0.11 S.D., compared to 0.32 S.D. for a matching-based evaluation on a small sample. He suggests effect sizes therefore need to be “graded on a curve”, with what constitutes big depending on the method of evaluation and the size of the sample. (Although also recall our posts on the problems of using S.D. to compare effect sizes in the first place.)
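For readers unfamiliar with effect sizes in S.D. units, a quick sketch of the standard calculation (a pooled-standard-deviation version of Cohen's d; the function name and sample data are illustrative, not from Slavin's piece):

```python
import numpy as np

def effect_size_sd(treat, control):
    """Mean difference between groups, expressed in pooled-S.D. units (Cohen's d)."""
    pooled_sd = np.sqrt((np.var(treat, ddof=1) + np.var(control, ddof=1)) / 2)
    return (treat.mean() - control.mean()) / pooled_sd

# Illustrative outcome data: treatment mean shifted up by one S.D. of the outcome
treat = np.array([1.0, 2.0, 3.0])
control = np.array([0.0, 1.0, 2.0])
d = effect_size_sd(treat, control)  # mean gap of 1.0 over pooled S.D. of 1.0
```

The same mean difference yields very different d values depending on the outcome's dispersion, which is one reason S.D.-based comparisons across studies can mislead.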

What does Alwyn Young’s paper mean for analysis of experiments?

I’ve been asked several times what I think of Alwyn Young’s recent working paper “Channelling Fisher: Randomization Tests and the Statistical Insignificance of Seemingly Significant Experimental Results”. After reading the paper several times and reflecting on it, I thought I would share some thoughts, with a particular emphasis on what I think it means for people analyzing experimental data going forward.

Weekly links March 4: all measures suck, make your work group thrive, lean research, and more…

Weekly links February 19: field experiments in accounting, and the legal profession’s resistance to RCTs, when won’t scientists move?, and more…

  • On the 3ie blog, Manny Jimenez notes the glaring omission of evaluations of private sector programs during last year’s “Year of Evaluation”
  • On the Conversation, ideas42 shares what insights from behavioral economics tell us about how to help people with their finances
  • Floyd and List on using field experiments in accounting and finance: with a recommendation to work in developing countries because the potential for randomization is higher, the firms aren’t as big, and the setting is less complex.
  • Greiner and Matthews on the (limited) use of RCTs in the legal profession in the U.S.: “The intensity of the United States legal profession’s resistance to the RCT is such that, viewed individually, each law RCT appears to be a unicorn, a magical creation with no origin story that appears briefly in a larger setting and then fades away.” They find more than 50 RCTs, but note that “what looking we were able to do generated no evidence that the results of an RCT in the United States legal profession were actually used, in the sense that a program or policy changed because of the study’s results”, and that “even when researchers have been able to field RCTs in the United States legal profession, lawyers and judges sometimes undermined them. And the lawyers and judges who did so appeared to have a common motive: certainty as to the ‘right’ answer.”

Weekly links February 12: All-knowing gods and cheating, Kenya lab experiments, Dads on leave, and more…
