I received this email from one of our readers:
“I don't know as much about list experiments as I'd like. Specifically, I have a question about administering them and some of the blocking procedures. I read a few of the pieces you recently blogged about and have an idea for one of my own; however, here's what I'd like to know: when you send your interviewers or researchers out into the field to administer a list experiment, how do you ensure that they are randomly administering the control and treatment groups? (This applies to a developing country as opposed to a survey administered over the phone.)”
This question of how to randomize questions (or treatments) on the spot in the field is of course a much more general one. Here’s my reply:
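One common approach is to take the randomization out of the interviewer's hands entirely: pre-randomize assignments before fieldwork and give each enumerator a fixed assignment sheet. A minimal sketch in Python (the function name, block size, and enumerator labels are all illustrative, not from the post); randomizing within small blocks keeps the arms balanced even if fieldwork stops early:

```python
import random

def make_assignment_sheet(n_per_enumerator, enumerators, block_size=4, seed=20150320):
    """Pre-randomize control/treatment within small blocks for each enumerator.

    A fixed seed makes the sheet reproducible and auditable; blocking keeps
    the arms balanced even if an enumerator completes fewer interviews.
    """
    rng = random.Random(seed)
    sheet = {}
    for enum in enumerators:
        arms = []
        while len(arms) < n_per_enumerator:
            # Each block holds an equal number of control and treatment slots.
            block = ["control", "treatment"] * (block_size // 2)
            rng.shuffle(block)
            arms.extend(block)
        sheet[enum] = arms[:n_per_enumerator]
    return sheet

# Each enumerator gets a printed, ordered list to follow interview by interview.
sheet = make_assignment_sheet(10, ["enum_A", "enum_B"])
```

Variants of the same idea include pre-printing the treatment or control version of the questionnaire in a randomized order, or building the randomization into tablet-based survey software, so the interviewer never makes an on-the-spot random draw.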
David McKenzie's blog
A common question of interest in evaluations is “which groups does the treatment work for best?” A standard way to address this is to look at heterogeneity in treatment effects with respect to baseline characteristics. However, there are often many such possible baseline characteristics to look at, and really the heterogeneity of interest may be with respect to outcomes in the absence of treatment. Consider two examples:
A: A vocational training program for the unemployed: we might want to know whether the treatment helps those who would have remained unemployed in the absence of the intervention more than those who would have been likely to find a job anyway.
B: Smaller class sizes: we might want to know whether the treatment helps those students whose test scores would have been low in the absence of smaller classes more than those students who were likely to get high test scores anyway.
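One way to operationalize this kind of heterogeneity (a hedged two-step sketch on simulated data, not a method taken from the post; in practice one would worry about overfitting the prediction step and might use sample splitting or a pre-specified index): fit the outcome on baseline covariates in the control group only, predict each unit's no-treatment outcome, then interact treatment with that prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: baseline covariate x drives the no-treatment outcome, and
# by construction the treatment helps low-predicted-outcome units more.
n = 2000
x = rng.normal(size=n)
treat = rng.integers(0, 2, size=n)
y0 = 2.0 + 1.5 * x + rng.normal(size=n)   # outcome absent treatment
effect = 1.0 - 0.5 * x                    # larger effect when x (and y0) is low
y = y0 + treat * effect

# Step 1: predict the no-treatment outcome from baseline covariates,
# using control observations only.
Xc = np.column_stack([np.ones((treat == 0).sum()), x[treat == 0]])
beta = np.linalg.lstsq(Xc, y[treat == 0], rcond=None)[0]
y0_hat = beta[0] + beta[1] * x            # predicted Y(0) for everyone

# Step 2: regress the outcome on treatment interacted with the
# (demeaned) predicted counterfactual outcome.
z = y0_hat - y0_hat.mean()
X = np.column_stack([np.ones(n), treat, z, treat * z])
coef = np.linalg.lstsq(X, y, rcond=None)[0]
# coef[1]: average effect at the mean predicted Y(0);
# coef[3] < 0: the program helps low-predicted-outcome units more.
```

The negative interaction coefficient recovered here is the signature of example B: effects concentrated among units whose counterfactual outcomes would have been low.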
- On the IGC blog, Eliana La Ferrara summarizes different work on fighting poverty with soap operas
- A new repository for data from IPA/J-PAL RCTs. My questionnaires and datasets are in the World Bank’s open data library – and cross-linked from my webpage.
- Dave Evans’ post on systematic reviews last week has drawn a long series of comments. This week, separate response blog posts by a 3ie team and by Langer, Haddaway and Land on the Africa Evidence Network
- Since we just changed to daylight saving time in the US – the LA Times rounds up a set of research results that look at the impacts of daylight saving changes, including “Springing forward prompts people to waste time on the Internet”
- IPA/J-PAL policy bulletin summarizing 7 microcredit RCTs, “Where Credit Is Due” – a very nice set of tables and figures that summarize the study features and results
- Chris Blattman rediscovers one of my favorite blog posts – on managers vs makers and how one meeting can kill your whole day
- Eval Blog’s 10 predictions for the future of evaluation “Most evaluations will be internal”, “Evaluation reports will become obsolete due to real-time data” and more…
- Vox on PLOS One’s new section for negative studies: a collection of negative, null, and inconclusive studies titled “missing pieces”, including a failure to replicate the idea that self-control gets depleted
- How to be efficient – excellent advice from Dan Ariely. In particular, I liked “A calendar should be a record of anything that needs to get done — not merely of interruptions like meetings and calls.” and “frequent email checks can temporarily lower your intelligence more than being stoned”
- A third paper in 3ie’s internal replication series is now out – along with a response from the authors (Stefan Dercon and co-authors). The authors’ response is interesting for the issues it raises about such replication exercises: “At the outset of this exercise, we were enthusiastic, but possibly naive participants. At its end, we find it hard to shake the feeling that an activity that began as one narrowly focused on pure replication morphed – once our original findings were confirmed (save for a very minor programming error that we willingly confess to) - into a 14 month effort to find an alternative method/structure of researching the problem that would yield different results.” (See also Berk’s posts on the previous replications).
- On the Let’s Talk Development blog, Emanuela Galasso reflects on the Chile Solidario program and how social programs can move from social protection to productive inclusion.
- From Cornell’s Economics that really matters blog – conducting fieldwork in a conflict zone in Mexico.
- One of the best descriptions of what the productivity term A is in the production function – Growth Economics illustrates through Universal Studios’ Harry Potter attraction.
- At the CGD blog – impact of the GAVI vaccine initiative on vaccination rates in poor countries – using a country GNI per capita threshold for eligibility.
- Ted Miguel is teaching a course on research transparency methods in the social sciences. Berkeley is posting the lectures on YouTube. Lecture 1 is now up.
- Chris Blattman on a paper looking at how the tendency to publish null results varies by scientific field.
- In Science, Jorge Guzman and Scott Stern on predicting entrepreneurial quality
- Ben Olken’s forthcoming JEP paper on pre-analysis plans in economics: this is a very nuanced and well-written piece, discussing both pros and cons. It notes a reaction I am increasingly persuaded by, which is that RCTs don’t really seem to have much of a data-mining problem in the first place, and also that “most of these papers are too complicated to be fully pre-specified ex-ante”. The main conclusion is that the benefits are highest from pre-specifying just a few key primary outcomes, and from specifying heterogeneity analysis and econometric specifications; the case is less clear for specifying causal chains/mechanisms/secondary outcomes, which can too easily become overly complicated and conditional.