Published on Development Impact

Weekly links June 23: predicting the impacts of STEM role models, multiple tests and complex interventions, deadlines approaching, and more…


·       Predict the impacts of a STEM role models intervention at scale: Along with a big group of co-authors, I’ve been working on several overlapping evaluations with high school students in their last couple of years of school in Ecuador, where we implemented video-based STEM role model interventions with over 47,000 students in over 1,100 schools. We would love it if anyone interested would offer their forecasts of the results through the Social Science Prediction Platform – you will get a brief overview of the experiment, and then be asked to predict, for boys and girls, how male and female role models affect attitudes, intentions, and preferences regarding whether students continue to university and what subjects they study.

·       On the IPA blog, Sofía Granados, Alejandra Rivera and Laura Vargas Rueda discuss their experiences using WhatsApp for remote data collection in Colombia. It includes some useful tips, such as WhatsApp numbers tending to stay stable even when people change phone numbers, and using a verified WhatsApp Business account to boost trust.

·       I seem to end up recommending an episode of Scott Cunningham’s Mixtape podcast almost every week at the moment, but he is doing a great job with these – the episode with Susan Athey is great for getting insights into how she made career shifts in topics, hearing how even incredibly smart people doubt themselves a lot, learning what economists have to contribute to tech, and a lot more, including the early lack of interest from many economists in machine learning methods and their use in tech: “then applied researchers lectured me and they were like, "You're supposed to start with a question. You're not supposed to start with the data." "The god of economics is supposed to tell you what your question is and you don't use the data to tell you what your question is." There was almost like a moral superiority of hypothesis driven, empirical work. Which I'm not saying that is wrong. it's just that also you can learn from your data.”

·       “These experiments could lift millions out of dire poverty” is the headline of a Nature news feature that discusses a bunch of cash transfer and graduation experiments and quotes Markus among several other development economists.

·       In the BMJ, Jef Leroy and co-authors offer suggestions for strengthening RCTs of complex interventions. It is notable for suggesting a different way of thinking about primary and secondary outcomes and multiple hypothesis testing than we typically do in economics. “Trials of complex interventions evaluate the impact of interventions with several interconnected components designed to affect multiple outcomes through one or several mechanisms…Challenges often relate to the number of intermediate and final outcomes that the trialist can assess and how that affects causal inference. These challenges lead to questions and confusion about how to conduct trials of complex interventions and interpret their findings…We discuss below that multiplicity is typically not a threat to the accuracy of inference from RCTs of complex interventions. The appropriate strategy to help minimise the problem of false positive findings is to declare and register outcomes in advance, use one measure per outcome and report on all outcomes irrespective of whether an intervention effect was found. Excessive limiting of the number of outcomes comes at an unacceptable cost as it restricts what can be learnt from the evaluation of a complex intervention by unnecessarily forcing researchers to not assess the intervention’s effect on outcomes important to decision makers and on outcomes along the impact paths….For trials of complex interventions, the distinction between primary and secondary outcomes as defined in current guidelines is not useful. In these trials, primary outcomes should be defined as those that are relevant based on the intervention intent and programme theory, declared (ie, registered), and adequately powered. Primary outcomes can include both intervention endpoints and intermediary outcomes. Confirmatory causal inference is limited to primary outcomes ... Secondary outcomes are all other outcomes” and on multiple testing corrections: “When a set of hypotheses (each about a different outcome) is tested simultaneously, the overall type I error rate, that is, the probability of wrongly rejecting at least one null hypothesis, increases. Simultaneous hypothesis testing assumes a universal null hypothesis that the intervention has no effect for all the outcomes investigated versus the alternative hypothesis of impact on at least one of these outcomes. Adjusting for multiplicity is only warranted if a set of hypotheses is tested simultaneously in this formal sense, but this test of a universal null vs the stated alternative hypothesis is nearly always irrelevant to the scientific or evaluation questions being investigated. When each hypothesis is limited to a single outcome, as is typically the case in impact evaluation, the probability of a false positive result remains the same irrespective of whether one or a million comparisons are tested”.
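The quoted distinction between simultaneous and per-outcome testing can be illustrated with a bit of arithmetic (my sketch, not from the paper): with m independent tests each run at level α, the probability of at least one false positive under a universal null is 1 − (1 − α)^m, while the error rate for any single pre-specified hypothesis stays at α no matter how many other outcomes are measured.

```python
# Illustrative sketch (not from the BMJ paper): per-test vs. family-wise
# type I error for m independent tests, each at significance level alpha.
alpha = 0.05

def familywise_error(m: int, alpha: float = 0.05) -> float:
    """P(at least one false positive) under a universal null, m independent tests."""
    return 1 - (1 - alpha) ** m

for m in (1, 5, 10, 20):
    print(f"m={m:2d}: per-test error = {alpha:.3f}, "
          f"family-wise error = {familywise_error(m, alpha):.3f}")
# The family-wise rate climbs toward 1 as m grows (about 0.40 at m=10),
# but each individual hypothesis, tested on its own, still has a false
# positive rate of alpha.
```

This is exactly the paper’s point: a multiplicity correction controls the family-wise number, which is only the relevant quantity if the universal null itself is the hypothesis of interest.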

·       Paper submissions are now open for NEUDC 2022, due August 15 – the conference will be held at Yale November 5-6.

·       Funding call: PEDL major grant window is now open, due August 15. They fund research work on private enterprise, competition and markets in low-income countries, with this call having a particular focus on climate and resiliency.

·       Early job market deadline: The World Bank’s Young Professionals Program (YPP) is now accepting applications until July 15, and will re-open for IFC positions only from August 15 to September 30.


David McKenzie

Lead Economist, Development Research Group, World Bank
