· On VoxDev, Sofia Zárate, Catherine Rodríguez and Fabio Sánchez use staggered DiD to look at the medium- and long-term impacts of a school feeding program in Colombia. “we find compelling evidence that PAE reduces both grade repetition and school dropout rates in primary and secondary education… School-fed students are 11 percentage points more likely to complete high school, as proxied by taking the Saber 11 exam, which corresponds to an increase of nearly 18% compared to non-beneficiaries given a baseline level for controls of 63%. … we also find that learning outcomes improve for beneficiary students. Using scores from the Saber 11 exam, which is comparable to the SAT in the United States, we observe that PAE participation boosts the academic performance of low-achieving students, while the impact on high-achieving students is negligible or even slightly negative. These positive outcomes translate into greater access to tertiary education… The impact is concentrated in access to technical and technological institutions, with an additional 1.5 percentage points, or a 12% increase over non-PAE students (13%). However, the impact on access to universities is not statistically significant.”
· Also on VoxDev, Christopher Hoy, Filip Jolevski and Anthony Obeyesekere summarize their work measuring tax evasion by firms in Indonesia using double list experiments (a stylized sketch of how the list-experiment estimator works is below). “the results suggest that around 25% of formal firms evade taxes, and evasion is particularly high among firms that don’t export, find tax administration to be a burden, and compete with the informal sector.”
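To make the method concrete, here is a minimal sketch of the list-experiment logic: one group of respondents reports how many items on a list of innocuous statements apply to them, another group gets the same list plus the sensitive item (here, evading taxes), and the difference in mean counts estimates the prevalence of the sensitive behavior; a double list design runs this twice with the roles reversed and averages the two estimates. The code below is an illustration on simulated data, not the authors’ implementation, and the item counts and 25% prevalence are made up for the example.

```python
import numpy as np

def list_experiment_estimate(counts_with_item, counts_without_item):
    """Single list: prevalence of the sensitive behavior is the difference in
    mean item counts between the group whose list includes the sensitive item
    and the group whose list omits it."""
    return np.mean(counts_with_item) - np.mean(counts_without_item)

def double_list_estimate(a_with, a_without, b_with, b_without):
    """Double list design: average the two single-list estimates (each
    respondent serves as 'treatment' on one list and 'control' on the other),
    which improves precision relative to a single list."""
    return 0.5 * (list_experiment_estimate(a_with, a_without)
                  + list_experiment_estimate(b_with, b_without))

# Illustrative simulated data: counts of applicable items reported by each group.
rng = np.random.default_rng(0)
n = 500
true_prevalence = 0.25
# Each list has 4 innocuous items; the "with item" groups add the sensitive one.
a_without = rng.binomial(4, 0.5, n)
a_with = rng.binomial(4, 0.5, n) + rng.binomial(1, true_prevalence, n)
b_without = rng.binomial(4, 0.4, n)
b_with = rng.binomial(4, 0.4, n) + rng.binomial(1, true_prevalence, n)

print(f"Estimated prevalence: {double_list_estimate(a_with, a_without, b_with, b_without):.3f}")
```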
· In my “to think more about” pile, I’ve always been a bit suspicious of the interpretation of many machine learning heterogeneous treatment effects (HTEs), and it is hard to know how to benchmark them well. A paper by Leng and Dimmery forthcoming in Information Systems Research (ungated version) provides one way of looking at this that is unlikely to be possible in most development applications – just take a really massive experiment, where you can calculate unbiased difference-in-means effects for many subgroups, and then compare these to the machine learning HTEs – they use online advertising experiments done on a sample of 25 million and “observe substantial discrepancies between machine learning-based treatment effect estimates and difference-in-means estimates directly from the randomized experiment.” They note that regularization trades off bias and variance, but can result in substantial bias in some cases. They then suggest a “calibration” approach to better align the two (a stylized sketch of the benchmarking idea is below).
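As a rough illustration of the benchmarking idea only: the sketch below simulates a randomized experiment, fits a simple T-learner as the machine-learning HTE estimator, compares its subgroup averages to the unbiased difference-in-means effects from the experiment, and then applies a crude linear rescaling as a stand-in for calibration. The simulated data, the T-learner, and the linear calibration step are my assumptions for exposition; they are not the authors’ estimator or their calibration method.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Simulated randomized experiment (a small stand-in for a massive ad experiment).
n = 100_000
x = rng.normal(size=(n, 3))
treat = rng.binomial(1, 0.5, n)
true_cate = 0.5 + x[:, 0]                      # treatment effect varies with x0
y = x @ np.array([1.0, 0.5, -0.5]) + treat * true_cate + rng.normal(size=n)

# ML-based HTE estimates via a simple T-learner: fit an outcome model within
# each arm and take the difference of predictions.
m1 = HistGradientBoostingRegressor().fit(x[treat == 1], y[treat == 1])
m0 = HistGradientBoostingRegressor().fit(x[treat == 0], y[treat == 0])
cate_hat = m1.predict(x) - m0.predict(x)

# Benchmark: within predefined subgroups (quartiles of x0), the experiment gives
# an unbiased difference-in-means effect to compare against the ML averages.
quartile = np.digitize(x[:, 0], np.quantile(x[:, 0], [0.25, 0.5, 0.75]))
dm, ml = [], []
for g in range(4):
    mask = quartile == g
    dm.append(y[mask & (treat == 1)].mean() - y[mask & (treat == 0)].mean())
    ml.append(cate_hat[mask].mean())
    print(f"quartile {g}: diff-in-means = {dm[-1]:.3f}, ML-based average = {ml[-1]:.3f}")

# A crude calibration step: regress the experimental subgroup effects on the ML
# subgroup averages and use the fit to rescale individual-level ML predictions.
calib = LinearRegression().fit(np.array(ml).reshape(-1, 1), np.array(dm))
cate_hat_calibrated = calib.predict(cate_hat.reshape(-1, 1))
```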
· Conference calls:
o The 22nd Midwest International Economic Development Conference (MWIEDC) will take place March 28-29, 2025 at the University of Illinois, Urbana-Champaign. Submissions are due January 8.
o The 2025 BITSS annual meeting on research transparency will be held at Berkeley on February 27. Submissions are due November 17.
o The 8th Workshop on Subjective Expectations will be held at Nova School of Business and Economics, Lisbon, on June 9-10, 2025. Submissions are due January 26.
· Reminders:
o DECRG is hiring on the job market
o Our call for “blog your job market paper” submissions is now open