· In Nature, John List has a Perspective on the need to augment efficacy trials with relevant tests of scale. Some of the piece will be familiar to those who have read his book The Voltage Effect, as he summarizes different reasons why things might not scale. But in this piece he also argues that for a class of policies he calls high fixed costs with impatient decision-makers (HFIDs), rather than testing efficacy first and then scaling, one needs to look at scaling from the start: “I denote this approach as option C thinking, which effectively asks: if I want to scale up this idea, what extra information do I need beyond an A/B test? To answer this question, I argue that we must include a scalable version of the studied programme alongside the best-case A/B test. Leveraging option C thinking in our initial designs is not simply adding a new treatment arm; rather, it augments the traditional approach with an experimental enhancement that produces the type of policy-based evidence that the science of scaling demands from the beginning.” As an example, he notes that in his work on early pre-school education in Chicago, the efficacy test might have highly motivated, well-trained, and well-compensated teachers, but “If we backward-induct from the reality that, at scale in thousands of schools, our programme would not have its dream budget or a dream applicant pool of teachers to choose from, then several potential issues emerge. We therefore designed our experiment to examine whether our curriculum could work with teachers who have varying abilities. That is, we employed teachers who would typically come and work in a school district like Chicago Heights. As we prepared to open CHECC, we hired our 30 teachers and administrators the same way the Chicago Heights public schools would, from the same candidate pool and with the same salary caps”.
· On the CGD blog, Lee Crawfurd and Helen Dempster do some ocular econometrics to suggest there could be new evidence of brain gain in Nigeria: the U.K. government changed its policies to make it much easier for health workers to come to the U.K., and the number of Nigerian-born nurses moving to the U.K. quadrupled. They show this is accompanied by an increase in the number of people passing nursing exams in Nigeria.
· After seeing several people recommend Asterisk Mag, a new quarterly journal, I finally got around to reading several articles in the latest issue on “Mistakes”. Several seem very interesting for people interested in development economics. John Yasuda discusses China’s experimental policy regime and how it worked a lot better for pursuing economic targets like GDP growth than for regulatory areas like environmental protection and food safety – and how not wanting to be the bearer of bad news makes coordination worse. The same issue also has Todd Moss on why solar isn’t scaling in Africa and lessons from World Bank/IFC efforts here, and Justin Sandefur on what economists got wrong about funding antiretroviral drugs to fight HIV in Africa. Coming soon in the same issue is a piece by Saarthak Gupta titled “The Ruin of Mumbai”, which is teased as being about the worst land-use policy in the world. The previous issue is on measurement, and includes this discussion of the debate over interpreting an RCT on colonoscopy effectiveness, stemming from confusion around ITTs vs LATEs vs per-protocol effects (see the short worked example after these links), as well as Ranil Dissanayake providing a history of global poverty measurement.
· On the Econ that matters blog, Qinyou Hu looks at the effect of a program in China to reduce bullying by improving students’ empathy – and then, using social network data, asks who should be targeted for these programs – concluding that targeting the best friends of bullies is particularly useful (assuming they are not also bullies themselves, I guess; see the targeting sketch after these links).
· Andrew Gelman on why honesty and transparency are not enough: “reproducibility is great, but if a study is too noisy (with the bias and variance of measurements being large compared to any persistent underlying effects), then making it reproducible won’t solve those problems… Lots of researchers are honest and transparent in their work but still do bad research. I wanted to be able to say that the research is bad without that implying that I think they are being dishonest.” A quick simulation of this point appears below.
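As promised above, here is a minimal sketch of the ITT/LATE distinction that the colonoscopy debate turns on. All numbers below are made up for illustration and are not the trial’s actual figures; the point is just that when only a fraction of those invited to screening comply, the intention-to-treat effect understates the effect of screening itself, and the Wald/LATE estimator rescales it by the take-up difference.

```python
# Toy sketch of ITT vs LATE -- all numbers are illustrative assumptions,
# not the actual colonoscopy trial's figures.

# Suppose 42% of those invited to screening actually get a colonoscopy,
# and essentially no one in the control group does.
compliance_treat = 0.42
compliance_control = 0.00

# Hypothetical outcome rates over follow-up, by assignment (not by receipt):
risk_invited = 0.0098
risk_control = 0.0120

# ITT: effect of being *assigned* the invitation, diluted by non-compliance.
itt = risk_invited - risk_control

# LATE: effect of *receiving* screening for compliers, via the Wald estimator,
# i.e. the ITT scaled up by the difference in take-up rates.
late = itt / (compliance_treat - compliance_control)

print(f"ITT:  {itt:+.4f} per person invited")
print(f"LATE: {late:+.4f} per complier actually screened")
```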
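And here is the targeting sketch referenced in the bullying item: a toy version of the friends-of-bullies idea, assuming a friendship network stored as a networkx graph and a known set of bullies. The graph, the names, and the simple ranking rule are my own illustrative assumptions, not the paper’s actual algorithm.

```python
import networkx as nx

# Toy friendship network: nodes are students, edges are friendships.
G = nx.Graph()
G.add_edges_from([
    ("ana", "bo"), ("bo", "cai"), ("cai", "dee"),
    ("dee", "eli"), ("eli", "ana"), ("bo", "dee"),
])

bullies = {"bo", "dee"}  # assumed known from survey data

# Candidate targets for the empathy program: friends of bullies who are
# not themselves bullies, ranked by how many bullies they are connected to.
candidates = {}
for b in bullies:
    for friend in G.neighbors(b):
        if friend not in bullies:
            candidates[friend] = candidates.get(friend, 0) + 1

targets = sorted(candidates, key=candidates.get, reverse=True)
print(targets)  # students to prioritize, most bully-connected first
```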
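Finally, a quick simulation of Gelman’s noise point (the parameter values are my own illustrative assumptions): a small true effect buried in large measurement noise produces estimates that swing wildly across replications, even though the analysis itself is perfectly reproducible.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed: the analysis is fully reproducible

true_effect = 0.1   # small persistent underlying effect
noise_sd = 2.0      # measurement noise dwarfs the effect
n = 50              # sample size per arm

# Replicate the same noisy two-arm study many times.
estimates = []
for _ in range(1000):
    treat = true_effect + noise_sd * rng.standard_normal(n)
    control = noise_sd * rng.standard_normal(n)
    estimates.append(treat.mean() - control.mean())

est = np.array(estimates)
print(f"true effect: {true_effect}")
print(f"mean estimate: {est.mean():+.3f}, sd across replications: {est.std():.3f}")
# The sd across replications is several times the true effect:
# each run is reproducible, but any single study is mostly noise.
```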