Randomized controlled trials are kind of a big deal in development economics right now. A recent article in The Economist shows a sizeable rise in the use of RCTs in economics overall over the last 15 years, and recent analysis by David McKenzie shows that RCTs make up a large minority of development papers in top journals (see the figures below).
Source: The Economist on the left; McKenzie (2016) on the right.
In his new book Experimental Conversations: Perspectives on Randomized Trials in Development Economics, Tim Ogden has assembled interviews with a distinguished group that interacts with RCTs in every imaginable way: you have those who pioneered the use of the method in development economics, the next generation of researchers, the chief critics of the method, and consumers of development RCTs at organizations like GiveWell, the Ford and Grameen Foundations, and the Center for Global Development. You also hear from one broader observer of economics as a field (Tyler Cowen) and one of the scholars who pioneered the use of RCTs in U.S. policy (Judy Gueron), to give added perspective.
Dean Karlan and Jacob Appel have a new book out called Failing in the Field: What we can learn when field research goes wrong. It is intended to highlight research failures and what we can learn from them, sharing stories that might otherwise be told only over a drink at the end of a conference, if at all. It draws on a number of Dean’s own studies, as well as those of several other researchers who have shared stories and lessons. The book is a good short read (I finished it in an hour), and definitely worth the time for anyone involved in collecting field data or running an experiment.
I was recently at the Novafrica conference in Lisbon, where one of the keynote talks was given by Stefan Dercon. He based it around a newly released short book he has written with Daniel Clarke, called Dull Disasters (open access version). The title is meant to indicate both the aim of making dealing with disasters a dull event rather than a media circus, and the goal of discussing ways to ‘dull’, or reduce, the impact of disasters.
Stefan started his talk by noting that disaster relief may well be the part of the whole international development and humanitarian system that is the least efficient and has had the least research on it. The book starts by noting the predictability of responses: “every time a natural disaster hits any part of the world, the newspaper headlines ten days later can be written in advance: ‘why isn’t the response more coordinated?’” He gives the responses to the earthquakes in Nepal and Haiti, to Hurricane Katrina, and to Ebola as examples. But he then notes the crux of the problem: “…The truth is everybody argues for coordination but nobody likes to be coordinated”.
Angela Duckworth’s new book Grit: The Power of Passion and Perseverance has been launched with great fanfare, reaching number two on the NY Times Nonfiction bestseller list. She recently gave a very polished and smooth book launch talk to a packed audience at the World Bank, and is working with World Bank colleagues on improving grit in classrooms in Macedonia. Billed as giving “the secret to outstanding achievement,” the book interested me as both a researcher and a parent. I thought I’d continue my book reviews series with some thoughts on it.
Over the summer I’ve been slowly working my way through the new book Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction by Guido Imbens and Don Rubin. It is an introduction in the sense that it is 600 pages and still doesn’t have room for difference-in-differences, regression discontinuity, synthetic controls, power calculations, dealing with attrition, dealing with multiple time periods, treatment spillovers, or many other topics in causal inference (they promise a volume 2). But it is not an introduction in the sense that it is graduate level, and I imagine it would be very confusing if you had no previous exposure to causal inference. So I thought I’d share some thoughts on this book for our readers.
Alan Gerber and Don Green, political scientists at Yale and Columbia respectively, and authors of a large number of voting experiments, have a new textbook out titled Field Experiments: Design, Analysis, and Interpretation. This is noteworthy because, despite the massive growth in field experiments, to date there hasn’t been an accessible and modern textbook for social scientists looking to work in, or better understand, this area. The new book is very good, and I definitely recommend that anyone working in this area read at least the key chapters.
The new book Uncontrolled by Jim Manzi has attracted a lot of recent press (e.g. see Markus’ recent post for discussion of David Brooks’ take, or this piece in the Atlantic), and makes the argument that there should be a lot more randomized experiments of social programs. I was therefore very interested to order a copy and just finished reading it.