
David McKenzie's blog

From my email correspondence: how to randomize in the field


I received this email from one of our readers:
“I don't know as much about list experiments as I'd like.  Specifically, I have a question about administering them and some of the blocking procedures.  I read a few of the pieces you recently blogged about and have an idea for one of my own; however, here's what I'd like to know: when you send your interviewers or researchers out into the field to administer a list experiment, how do you ensure that they are randomly administering the control and treatment groups? (This applies to a developing country as opposed to a survey administered over the phone.) “
This question of how to randomize questions (or treatments) on the spot in the field is of course a much more general one. Here’s my reply:
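The reply itself is not reproduced in this excerpt, but one standard way to keep interviewers from choosing versions on the spot is to randomize assignments centrally before fieldwork begins, and print or staple the assigned survey version onto each respondent's questionnaire. A minimal sketch (the function name, seed, and sample size are illustrative, not from the post):

```python
import random

def assign_versions(respondent_ids, seed=20150313):
    """Pre-assign each sampled respondent to the treatment (long-list)
    or control (short-list) version before interviewers leave for the
    field, so assignment cannot be influenced on the spot."""
    rng = random.Random(seed)      # fixed seed makes the draw reproducible
    ids = sorted(respondent_ids)   # stable ordering before sampling
    treated = set(rng.sample(ids, len(ids) // 2))
    return {i: ("treatment" if i in treated else "control") for i in ids}

assignment = assign_versions(range(1, 101))
print(sum(v == "treatment" for v in assignment.values()))  # prints 50
```

Because the seed is fixed, the assignment list can be regenerated in the office, so supervisors can verify that each respondent actually received the version they were assigned.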

Endogenous stratification: the surprisingly easy way to bias your heterogeneous treatment effect results and what you should do instead


A common question of interest in evaluations is “which groups does the treatment work for best?” A standard way to address this is to look at heterogeneity in treatment effects with respect to baseline characteristics. However, there are often many such possible baseline characteristics to look at, and really the heterogeneity of interest may be with respect to outcomes in the absence of treatment. Consider two examples:
A: A vocational training program for the unemployed: we might want to know whether the treatment helps those who were likely to stay unemployed in the absence of the intervention more than those who would likely have found a job anyway.
B: Smaller class sizes: we might want to know whether the treatment helps those students whose test scores would have been low in the absence of smaller classes more than those students who were likely to get high test scores anyway.
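The recommended fix is not excerpted here, but one approach discussed in this literature is to predict each unit's untreated outcome leaving that unit's own observation out of the fit, so that its own outcome cannot mechanically drive the stratum it lands in. A pure-Python sketch with hypothetical control-group data (all names and numbers are illustrative):

```python
# Hypothetical control-group data: x is a baseline characteristic,
# y is the endline outcome (numbers are illustrative, not from the post).

def ols_fit(xs, ys):
    """Return (intercept, slope) of a simple OLS fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def loo_predictions(xs, ys):
    """Predict each unit's outcome from a fit on all OTHER units only,
    so a unit's own outcome never enters its own predicted value."""
    preds = []
    for i in range(len(xs)):
        a, b = ols_fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        preds.append(a + b * xs[i])
    return preds

x_control = [2, 4, 6, 8, 10]
y_control = [20, 41, 61, 79, 100]
predicted = loo_predictions(x_control, y_control)
```

Treatment effects can then be compared across groups defined by these leave-one-out predictions (e.g. above versus below the median predicted outcome) without the own-outcome bias that full-sample prediction introduces.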

Weekly links March 13: Soap Operas, New Data Access, Daylight Saving and Goofing Off, and more…


Weekly links March 6: The future of evaluation, publishing negative/null results, Science publishes a non-experimental study, and more…


Blog links February 27th: What counts as a nudge, being efficient, debiasing, and more…

  • How to be efficient – excellent advice from Dan Ariely. In particular, I liked “A calendar should be a record of anything that needs to get done — not merely of interruptions like meetings and calls” and “frequent email checks can temporarily lower your intelligence more than being stoned”.

Blog links February 20: understandability, the replication debate continues, thoughts on the “Africa problem in economics”, and more…

  • A third paper in 3ie’s internal replication series is now out – along with a response from the authors (Stefan Dercon and co-authors). The authors’ response is interesting for the issues with such replication exercises that it raises: “At the outset of this exercise, we were enthusiastic, but possibly naive participants. At its end, we find it hard to shake the feeling that an activity that began as one narrowly focused on pure replication morphed – once our original findings were confirmed (save for a very minor programming error that we willingly confess to) – into a 14 month effort to find an alternative method/structure of researching the problem that would yield different results.” (See also Berk’s posts on the previous replications.)
  • On the Let’s Talk Development blog, Emanuela Galasso reflects on the Chile Solidario program and how social programs can move from social protection to productive inclusion.
  • From Cornell’s Economics that really matters blog – conducting fieldwork in a conflict zone in Mexico.

Evaluating an Argentine regional tourism policy using synthetic controls: tan linda que enamora?

In 2003, the Argentine province of Salta launched a new tourism development policy with the explicit objective of boosting regional development. This included improving tourism and transport infrastructure, restoring historical and cultural heritage areas, tax credits for the construction and remodeling of hotels, and a major promotion campaign at the national and international levels.

Weekly links February 13: Cricket, Harry Potter, vaccine initiatives, and more…


Weekly links Feb 6, 2015: research transparency, reliable 9% response rates, protests as a constraint to power, and more…

  • Ted Miguel is teaching a course on research transparency methods in the social sciences. Berkeley is posting the lectures on YouTube. Lecture 1 is now up.
  • Chris Blattman on a paper looking at how the tendency to publish null results varies by scientific field.
  • In Science, Jorge Guzman and Scott Stern on predicting entrepreneurial quality
  • Ben Olken’s forthcoming JEP paper on pre-analysis plans in economics: this is a very nuanced and well-written piece, discussing both pros and cons. It notes a reaction I am increasingly persuaded by, which is that RCTs don’t really seem to have much of a data-mining problem in the first place, and also that “most of these papers are too complicated to be fully pre-specified ex-ante”. The main conclusion is that the benefits are highest from pre-specifying just a few key primary outcomes, along with the heterogeneity analysis and econometric specifications – the case is less clear for pre-specifying causal chains/mechanisms/secondary outcomes, which can too easily become too complicated and conditional.

Tools of the Trade: a joint test of orthogonality when testing for balance

This is a very simple (and for once short) post, but since I have been asked this question quite a few times by people who are new to doing experiments, I figured it would be worth posting. It is also useful for non-experimental comparisons of a treatment and a control group.
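As a sketch of what such a joint test typically looks like (the post presumably implements it in Stata; this illustrative version assumes numpy and uses simulated data): regress the treatment dummy on all baseline covariates at once, then compute the F-statistic for the hypothesis that every covariate coefficient is jointly zero.

```python
import numpy as np

def joint_orthogonality_F(T, X):
    """F-statistic for the joint test that all covariate coefficients
    are zero in an OLS regression of the treatment dummy T on the
    baseline covariates X (linear probability model)."""
    n, k = X.shape
    Xfull = np.column_stack([np.ones(n), X])      # add a constant
    beta, *_ = np.linalg.lstsq(Xfull, T, rcond=None)
    resid = T - Xfull @ beta
    rss_full = float(resid @ resid)               # unrestricted RSS
    rss_restricted = float(((T - T.mean()) ** 2).sum())  # constant-only RSS
    return ((rss_restricted - rss_full) / k) / (rss_full / (n - k - 1))

# Simulated data: covariates independent of a randomly assigned treatment,
# so the F-statistic should be unremarkable under the null.
rng = np.random.default_rng(20150206)
X = rng.normal(size=(200, 3))
T = (rng.random(200) < 0.5).astype(float)
F_stat = joint_orthogonality_F(T, X)
```

The F-statistic is then compared to the F(k, n − k − 1) distribution; a small statistic (large p-value) is consistent with the covariates being jointly orthogonal to treatment, i.e. with balance.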