As a change from my usual posts, I thought I’d note five small things I’ve learned recently, mostly to do with Stata, with the hope that they might help others, or at least jog my memory when I unlearn them again soon.
1. Stata’s random number generator caps the seed you can set at 2,147,483,647.
Why did I learn this? We were doing a live random assignment for an impact evaluation I am starting in Colombia. We had programmed the code and tested it several times, and it worked fine. In our test code, we had set the seed for random number generation to the date “04112018”. When my collaborator went to run this live, we decided to also append the time of the drawing, so that the seed became “041120180304”. This generated an error and prevented the code from running. Luckily we could quickly fix it, and the live draw proceeded OK. But lesson learned: 2^31-1 is a large number, but it sometimes binds.
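A quick sanity check makes the failure obvious (sketched in Python since the issue is just integer comparison; `seed_is_valid` is a hypothetical helper for illustration, not a Stata command — in Stata itself, `set seed` simply errors when the number exceeds 2,147,483,647):

```python
# Stata seeds must fit in a signed 32-bit integer, so the maximum is 2**31 - 1.
STATA_MAX_SEED = 2**31 - 1  # 2,147,483,647

def seed_is_valid(seed_string):
    """Check whether a date/time-based seed string fits Stata's seed limit."""
    return int(seed_string) <= STATA_MAX_SEED

print(seed_is_valid("04112018"))      # date-only seed (4,112,018): fine -> True
print(seed_is_valid("041120180304"))  # date + time (41,120,180,304): too big -> False
```

Appending the time quadrupled the number of digits that matter, pushing the seed well past the 32-bit boundary even though the date alone was comfortably below it.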
- NBER Summer Institute development economics program and labor studies program.
- The map of “Manuscript-Earth” featuring “The pit of you saved those files, right? Right?”, “confused about the big picture woods”, “The island of misfit results” and other glorious landmarks (h/t Dave Evans).
- Do you say “no” enough to new projects? Anton Pottegard has a nice poster of 8 practical tools to assist in saying no – including JOMO (joy of missing out) – “once a project is turned down, set time aside to actively ponder about how happy you are not to be doing it” (h/t Scott Cunningham).
Teachers and Teaching
- How much scripting is too much scripting? Piper et al. review the evidence and find that “structured teachers' guides improve learning outcomes, but that overly scripted teachers' guides are somewhat less effective than simplified teachers' guides that give specific guidance to the teacher but are not written word for word for each lesson in the guide.”
- Teachers in Uganda tend to believe they are better than most other teachers in terms of ability and effort. This is especially true for low-effort teachers (Sabarwal, Kacker, and Habyarimana).
- Across 328 studies with nearly 4,000 effects, Direct Instruction performed really well: “All of the estimated effects were positive and all [with cognitive outcomes] were statistically significant” (Stockard et al.). What’s Direct Instruction? Think scripted lessons PLUS.
- A small study of 36 teachers in China showed that teachers “scored high on classroom organization, but lower on emotional support and instructional support.” Also, teachers who believe students should be at the center do better. (Coflan et al.)
- A large, unconditional increase in teacher salaries in Indonesia had no impact on student performance (de Ree et al.). This paper has been around (here’s my blog post on it), but it’s just now been published.
- Training teachers in a low-cost, highly scripted teaching method led to big gains in Papua New Guinea (Macdonald and Vu).
- Having subject-specific teachers in primary school may actually lead to less learning and lower student attendance. Evidence from the USA (Fryer). (My blog post about it.)
This is a guest post by Bruce Wydick.
It isn’t hard to understand why Andrew Leigh would write a book on randomized controlled trials. A kind of modern renaissance man, Leigh currently serves as a member of the Australian House of Representatives. But in his prior life as an economist (Ph.D. from Harvard’s Kennedy School), Leigh published widely in the fields of public finance, labor, health, and political economy, even winning the Economic Society of Australia's Young Economist Award, a kind of John Bates Clark Medal for Australians. His evolution from economist to politician must constantly evoke the following question: What is the best research approach for informing practical policy? In his new book, Leigh leaves little doubt about his answer. Randomistas: How Radical Researchers Changed Our World (forthcoming, Yale University Press) heralds the widespread incorporation of the randomized controlled trial (RCT) into the mainstream of social science.
- Book reviews
- On VoxDev: 1) how ethnic patronage determines rents and investments in Kenyan slums; 2) speeding up court pre-trials in Senegal without reducing decision quality; and 3) a video with John Sutton on how to attract FDI and generate jobs in Africa – he is a big fan of industrial parks.
- The HBS After Hours podcast has an interview with Raffaella Sadun discussing her work on the World Management Survey, including measurement issues and why basic management is underrated relative to grand strategy.
- 80,000 hours podcast with Eva Vivalt – with a discussion of the Y-Combinator basic income experiment starting up – and whether you should release results in the interim – as well as on how to generalize from evidence, and on collecting priors about projects.
A couple of years ago, an influential paper in Science by Banerjee and coauthors looked at the impact of poverty graduation programs across 6 countries. At the time (and probably since) this was the largest effort to look at the same(ish) intervention in multiple contexts at once – arguably solving the replication problem and proving external validity in one fell swoop.
One of the standard defenses of an RCT proposal to a skeptic is to invoke budget and implementation capacity constraints and argue that, since not everyone will get the desired treatment (at least initially), the fairest approach is to randomly allocate treatment among the target population. While this is true, it is also possible to take participants’ welfare into account – incorporating their preferences and their expected responses to treatment – while designing an RCT that still satisfies the aims of the researcher (identifying unbiased treatment effects with sufficient precision). A recent paper by Yusuke Narita makes significant headway in this direction, and development economists should take notice.
- Tim Bartik offers a detailed response in the comments to my recent blog post on Bartik/shift-share instruments.
- A new release of ietoolkit is up on SSC. Type [ssc install ietoolkit, replace] in Stata to update. New features include options to display normalized differences and an F-test for joint orthogonality in iebaltab, and additions to iegitaddmd and iematch.
- The econthatmatters blog has a recap of different sessions at the Midwest International Economic Development conference.
- Using photos and machine learning to cheaply measure the height of people in surveys – a nice data blog post.
- Rachel Strom shares her experiences as a graduate student with depression – which hopefully can help others experiencing similar issues.
Many education investments focus on the first years of primary education or – even before that – early child education. The logic behind this is intuitive: Without a solid foundation, it’s hard for children and youth to gain later skills that use those foundations. If you can’t decipher letters, then it’s going to be tough to learn from a science textbook. Or even a math textbook. But it’s important to remember that for most “investors” (whether governments or parents or the children themselves), the most basic skills aren’t the ultimate goal. The objective is better life outcomes. Most of the justification for these early interventions is that they will translate into better lives once these children grow up.
DI: Please provide a short paragraph describing what you do in this job, and give us a sense of what a typical day or week might look like for you.

My job is to conduct independent rigorous impact and performance evaluations of social programs in developing countries. Most of this work is conducted under contract to US government agencies (mostly MCC and USAID) and various foundations, who issue requests for proposals to evaluate their programs. In my eight years at Mathematica I’ve worked on evaluations in Asia, Africa, and Eastern Europe, and in topic areas including agriculture, primary education, vocational training, maternal and child health, land, and others. As senior researcher on an evaluation team I’m typically responsible for technical leadership of all aspects of an evaluation, including study design, data collection, and final analysis and reporting. Last week was fairly typical and included work on designing a randomized controlled trial of an anti-child labor program, drafting a quantitative survey of vocational education students, and planning the analysis of survey data from farmers in Morocco.
- An excellent Trade Talks podcast with Dave Donaldson includes a detailed discussion of his work on the impact of railroads on development in India and in U.S. economic history.
The latest Journal of Economic Perspectives includes:
- Acemoglu provides a summary of Donaldson’s work that led to him receiving the John Bates Clark Medal.
- Several papers on risk preferences, including a discussion of whether risk preferences are stable and how to think about them if they are not. An interesting sidenote is a comment on how much measurement error there is when using incentivized lotteries: the correlations between risk premia measured for the same individual using different experimental choices can be quite low, and correlations tend to be higher for survey measures. There is also speculation that the measurement error may be worse in developing countries: a “large share of the papers that document contradictory effects of violent conflict or natural disasters use experimental data from developing countries, but these tools were typically developed in the context of high-income countries. They may be more likely to produce noisy results in samples that are less educated, partly illiterate, or less used to abstract thinking.”
- A series of papers on how much the U.S. gains from trade.