
Weekly links March 23: recall revisited, Imbens critiques the Cartwright-Deaton RCT critiques, a new source for learning causal inference, and more...

  • The bias in recall data revisited: On the IFPRI blog, Susan Godlonton and co-authors discuss their work on “mental anchoring”, the tendency to rely too heavily on one piece of information (the “anchor”) when making a decision. Using panel data in which people report current outcomes and are also asked to recall outcomes from a year earlier, they find that people use their current outcomes as an anchor when recalling the past: “a $10 increase reported in the 2013 concurrent report for monthly income was associated with a $7.50 increase in the recalled monthly income for 2012” (a simple simulated sketch of this anchoring regression follows this list).
  • Scott Cunningham posts his “mixtape” on teaching causal inference: a textbook that may be of particular interest to many of our readers because of its applied focus, its use of Stata examples and Stata datasets, and its coverage of some topics not found in many of the alternatives (e.g. directed acyclic graphs, synthetic controls).
  • A school is not a factory: on Let’s Talk Development, Dave Evans discusses Roland Fryer’s work on teacher specialization in the early grades of schooling.
  • Guido Imbens offers a forceful response to the Cartwright and Deaton (DC2017) critiques of RCTs:
    1) “Because DC2017 do not make this distinction between design and analysis explicit, it is unclear to me whether it is the design of randomized experiments they take issue with, or the analysis, or both, and, more specifically, what Cartwright and Deaton see as the alternatives at each stage”;
    2) “DC2017 also raise concerns regarding inference in small samples. I think these are largely overblown. One of the advantages of randomization is that it makes the analyses more robust to changes in specification than they would be in observational studies. As a result, I think the concerns with using refinements to confidence intervals based on the literature on Behrens-Fisher problem, raised here and in D2010, are generally misplaced”;
    3) “researchers are often motivated by the many (i.e., thousands of) large scale randomized experiments run in big tech companies such as Google, Facebook and Amazon, as well as in smaller ones, where high stakes decisions are systematically based on such experiments. There is widespread agreement in these settings regarding the fundamental value of randomization and experimentation for decision making, with a deep suspicion of having decisions driven by what DC2017 charitably call “expert knowledge” (DC2017, p.1) and Kohavi et al. [2007] call, in more colorful language, the HIPPO (Highest Paid Person’s Opinion)”. He goes on to note recent advances in ways to maximize the benefits of experiments that DC2017 ignore;
    4) “Despite the suggestions in DC2017, internal and external validity are well-understood concepts, and it would be helpful if DC2017 had used them in the standard manner rather than proposing new terms”.
  • Job Openings: The Development Impact Evaluation (DIME) unit within the World Bank’s Research Group is launching a recruitment process for Research Assistants and Field Coordinators to work on a large portfolio of impact evaluation studies around the world. More details in this link.
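Since the anchoring finding above is essentially a regression of recalled income on concurrently reported income, here is a minimal simulated sketch of that relationship. Everything in it is hypothetical (the variable names, the data-generating process, and the 0.75 anchoring weight, chosen only to match the quoted $10-to-$7.50 pattern); it is not the authors’ code or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical panel: concurrently reported 2013 income and an
# anchor-independent memory of true 2012 income.
income_2013 = rng.normal(120, 25, n)
memory_2012 = rng.normal(100, 10, n)

# Anchored recall as a weighted average of the current report (the anchor)
# and the independent memory component; the 0.75 weight reproduces the
# quoted "$10 increase -> $7.50 increase" relationship.
recalled_2012 = 0.75 * income_2013 + 0.25 * memory_2012 + rng.normal(0, 5, n)

# OLS slope of recalled 2012 income on concurrent 2013 income.
slope, intercept = np.polyfit(income_2013, recalled_2012, 1)
print(f"anchoring slope ≈ {slope:.2f}")  # ≈ 0.75 under this simulated process
```

Because the memory component is independent of the 2013 report in this sketch, the fitted slope recovers the anchoring weight directly; in real recall data the two would be correlated, which is part of what makes the bias hard to untangle.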

Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
