- Pew Research on “what we learned about surveying with mobile apps”: “Immediate responses and feedback can be helpful and revealing,” but “App response rates were lower than Web rates overall and for each of the 14 surveys we conducted.”
- Freakonomics asks “How do we know what really works in healthcare?” – a podcast about RCTs. I like this comment from a doctor about scaling up “I think Medicare’s comment was that it’s really hard to do. We’re not sure we could scale it. Well, we f***ing scaled open heart surgery. We scaled separating Siamese twins. We scaled transplanting hearts and lungs, curing complex cancers. We’re sequencing the human genome. You’re telling me we can’t have a nurse go out and check on your mom or grandmother in a highly organized, well-structured, well-trained intervention for which someone’s already doing it for hundreds and hundreds of patients every day?”
Today in the Upshot, Justin Wolfers heavily criticizes a recent study that has received lots of media attention claiming that child outcomes are barely correlated with the time that parents spend with their children. He writes:
Context: you are randomly selecting people for some program, such as a training or transfer program, in which you expect less than 100% take-up among those assigned to treatment. You are relying on an oversubscription design, in which more people apply for the course/program than you have slots.
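The lottery in an oversubscription design can be sketched in a few lines. This is a minimal illustration, not any particular program's procedure; the applicant IDs, pool size, and number of slots are all made up:

```python
import random

# Hypothetical applicant pool: more applicants than available slots (oversubscription).
applicants = [f"applicant_{i}" for i in range(200)]
n_slots = 80  # program capacity

rng = random.Random(42)  # fixed seed so the lottery is reproducible and auditable
shuffled = rng.sample(applicants, len(applicants))

treatment = shuffled[:n_slots]   # lottery winners: offered a slot (may or may not take it up)
control = shuffled[n_slots:]     # lottery losers: the comparison group

# With less than 100% take-up, the random offer (not actual participation) is
# what is randomized, so analysis proceeds as intent-to-treat, or uses the
# offer as an instrument for participation.
```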
- Does shaming people to pay taxes work? Yes according to an experiment in the U.S., but only if they don’t owe too much. (h/t @dinapomeranz)
- Chris Blattman offers his take on “Does Economics have an Africa problem?” – is it just me, or is this whole debate a bit too Africa-centric? Economics has at least as much a Middle East problem, or Eastern Europe problem, or East Asia problem – in my view more, if we compare the amount of research activity devoted to these other regions.
- Sana Rafiq discusses how behavioral biases affect our survey questions on the Let’s Talk Development blog, in the context of trying to replicate some of Sendhil Mullainathan’s scarcity work: when asking whether people would travel across town to get a bargain, “There is no guarantee that the product will still be there once I go across town. It’s very likely that the product is gone by the time I get there.” “Of course! By assuming the availability of the product, we had let our own implicit biases, based on our mental models, influence the design of the question.”
Bruce Wydick on the impact of giving away TOMS Shoes: he gives kudos to TOMS for being open to evaluation and responsive to findings, but what caught my eye was this observation: “The bad news is that there is no evidence that the shoes exhibit any kind of life-changing impact,...”
I received this email from one of our readers:
“I don't know as much about list experiments as I'd like. Specifically, I have a question about administering them and some of the blocking procedures. I read a few of the pieces you recently blogged about and have an idea for one of my own; however, here's what I'd like to know: when you send your interviewers or researchers out into the field to administer a list experiment, how do you ensure that they are randomly administering the control and treatment groups? (This applies to a developing country as opposed to a survey administered over the phone.) “
This question of how to randomize questions (or treatments) on the spot in the field is of course a much more general one. Here’s my reply:
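One way to see the issue concretely: rather than asking interviewers to flip coins in the field (which is error-prone and manipulable), the assignment can be pre-generated in the office and tied to the respondent ID, so each interviewer simply looks up (or computes) which questionnaire version to administer. Below is an illustrative sketch of a deterministic, tamper-resistant mapping from ID to version; the `HH-` ID scheme is hypothetical, and this is one possible approach, not necessarily the one described in my reply:

```python
import hashlib

def assign_version(respondent_id: str, n_versions: int = 2) -> int:
    """Deterministically map a respondent ID to a questionnaire version.

    Hashing the ID gives a reproducible 'coin flip' that the interviewer
    cannot influence on the spot: version 0 = control list, 1 = treatment list.
    """
    digest = hashlib.sha256(respondent_id.encode()).hexdigest()
    return int(digest, 16) % n_versions

# Example: look up versions for a few (made-up) sampled households.
for rid in ["HH-0001", "HH-0002", "HH-0003"]:
    print(rid, "-> version", assign_version(rid))
```

Because the mapping is deterministic, headquarters can verify after the fact that each respondent received the version they were supposed to get.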
A common question of interest in evaluations is “which groups does the treatment work for best?” A standard way to address this is to look at heterogeneity in treatment effects with respect to baseline characteristics. However, there are often many such possible baseline characteristics to look at, and really the heterogeneity of interest may be with respect to outcomes in the absence of treatment. Consider two examples:
A: A vocational training program for the unemployed: we might want to know if the treatment helps more those who were likely to stay unemployed in the absence of an intervention compared to those who would have been likely to find a job anyway.
B: Smaller class sizes: we might want to know if the treatment helps more those students whose test scores would have been low in the absence of smaller classes, compared to those students who were likely to get high test scores anyway.
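One way to operationalize heterogeneity with respect to the counterfactual outcome, in the spirit of the "endogenous stratification" approach of Abadie, Chingos, and West, is to predict the untreated outcome from baseline characteristics using the control group, and then compare treatment effects across strata of that prediction. A minimal sketch on simulated data (the data-generating process and stratification rule here are illustrative assumptions, not a recommended analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: baseline covariates X, random treatment T, and an outcome Y
# whose treatment effect is larger for those with low untreated outcomes.
n = 2000
X = rng.normal(size=(n, 3))
T = rng.integers(0, 2, size=n)
y0 = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)  # untreated outcome
gain = np.where(y0 < 0, 2.0, 0.5)                         # bigger gain if low y0
Y = y0 + T * gain

# Step 1: predict the untreated outcome from baseline covariates,
# fitting simple OLS (via least squares) on the control group only.
Xc = np.column_stack([np.ones((T == 0).sum()), X[T == 0]])
beta, *_ = np.linalg.lstsq(Xc, Y[T == 0], rcond=None)
pred = np.column_stack([np.ones(n), X]) @ beta            # predicted y0 for everyone

# Step 2: compare simple difference-in-means effects across predicted-outcome strata.
median = np.median(pred)
effects = {}
for label, mask in [("low predicted outcome", pred < median),
                    ("high predicted outcome", pred >= median)]:
    effects[label] = Y[mask & (T == 1)].mean() - Y[mask & (T == 0)].mean()
    print(f"{label}: estimated effect = {effects[label]:.2f}")
```

A caveat worth flagging: naive in-sample prediction like this can overfit and bias the stratum-specific estimates, which is why split-sample or leave-one-out prediction is the usual remedy in this literature.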