· The July AER has the Nobel lectures of Banerjee, Duflo and Kremer. Abhijit’s starts “When, some twenty-five years ago, I first started doing RCTs, the most common reaction was one of puzzled tolerance… as the more candid among them put it, ‘are RCTs economics?’” – he then talks about how to think about generalization, and addresses whether RCTs answer big questions – noting that “the definition of big questions is itself the product of a particular understanding of economics. The implicit and sometimes explicit premise is that the macroeconomy is key; in a market economy individuals are supposed to do the best they can within the constraints imposed by macroeconomic policies and the tax system. And yet, the evidence from many years of work in development economics suggests that this is not the case; markets routinely fail to deliver efficient outcomes and so do nonmarket institutions, like schools and hospitals run by governments and NGOs. For a development economist the big questions are often whether people are realizing their full potential and, if not, what would enable them to do so”.
In her lecture, Esther talks about how RCTs influence policy: “each experiment is like a dot on a pointillist painting: on its own it does not mean much, but the accumulation of experimental results eventually paints a picture that helps make sense of the world, and guide policy. It is the accretion of results that makes sense and justifies the whole enterprise.” – she then walks through how RCTs have influenced microfinance and the path to building and scaling their Teaching at the Right Level approach.
Michael’s lecture emphasizes the relationship between RCTs and innovation. One point he makes is that “experiments are inherently collaborative, requiring us to work with practitioners in governments and civil society, teams of survey enumerators, and specialists in other fields such as education, health, agriculture, or psychology. This collaboration allows the ideas and experiences of a much broader set of people to enter economic research” and notes “I initially thought of experiments primarily as evaluation but now see many experiments as more akin to beta tests, useful in developing new products or policies and not just studying existing ones.”
· BITSS has slides and videos from two virtual sessions on research reproducibility held at the WEAI conference – one on teaching reproducibility in the classroom, and one on lessons learned so far by the AEA data editor.
· Nick Bloom has a policy brief on working from home, including his advice for anyone crafting working-from-home policies for a post-Covid world (when we get there): “the best advice is plan to work from home about 1 to 3 days a week. It’ll ease the stress of commuting, allow for employees to use their at-home days for quiet, thoughtful work, and let them use their in-office days for meetings and collaborations… I saw similarly large variations in views in my China experiment, which often changed over time. Employees would try WFH and then discover after a few months it was too lonely or fell victim to one of the three enemies of the practice — the fridge, the bed, and the television — and would decide to return to the office.”
· Andrew Gelman on understanding the “average” treatment effect – “What I want to talk about today is interpreting that number. It’s something that came up in the discussion of growth mindset. The reported effect size was 0.1 points of grade point average (GPA). GPA is measured on something like a 1-4 scale, so 0.1 is not so much; indeed one commenter wrote, “I hope all this fuss is for more than that. Ouch.” Actually, though, an effect of 0.1 GPA point is a lot. One way to think about this is that it’s equivalent to a treatment that raises GPA by 1 point for 10% of people and has no effect on the other 90%. That’s a bit of an oversimplification, but the point is that this sort of intervention might well have little or no effect on most people. In education and other fields, we try lots of things to try to help students, with the understanding that any particular thing we try will not make a difference most of the time. If mindset intervention can make a difference for 10% of students, that’s a big deal.”
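To see the arithmetic behind Gelman’s point, here is a minimal sketch (not from his post; the sample size and variable names are purely illustrative) showing that a treatment which raises GPA by a full point for 10% of students, and does nothing for the rest, averages out to an effect of about 0.1:

```python
# Hypothetical illustration of Gelman's point: a concentrated effect
# (1 GPA point for 10% of students, 0 for the other 90%) shows up as
# an "average" treatment effect of roughly 0.1.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                               # hypothetical number of students
responder = rng.random(n) < 0.10          # 10% of students respond to the intervention
effect = np.where(responder, 1.0, 0.0)    # +1 GPA point for responders, 0 otherwise

print(effect.mean())                      # ~0.1, the average treatment effect
```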
· IPA’s RECOVR research hub now has results and analysis up from rapid response COVID-19 surveys in Peru, Pakistan, Mexico, Bangladesh, Ecuador, Ethiopia and Senegal, with more to come.