- On Project Syndicate – “Shouldn’t economists ask themselves whether it is morally justifiable to provide even strictly technical advice to self-dealing, corrupt, or undemocratic governments?”
- On Let’s Talk Development, Dan Rogger summarizes some of the latest systems research on the quality of governance; and Bilal Zia summarizes his new paper on how business aspirations are correlated with better small firm outcomes in the cross-section and short panel.
- VoxEU has a new “blogs & reviews” feature – they note that “Very few of the old-style ‘clip and comment’ blogs are still active. On the short and furious end, they have been squeezed by Twitter. On the long, serious end, they have been squeezed by Vox columns and the many Vox-like websites that post ‘blogs’… my idea in launching this new feature is to encourage a much wider range of economists to get into the business of commenting on public policy issues based on their general, research-based knowledge and experience. The essays on this page are far more ‘free form’ than Vox columns. They can be shorter or longer, more technical or more informal.”
As a change from my usual posts, I thought I’d note five small things I’ve learned recently, mostly to do with Stata, with the hope that they might help others, or at least jog my memory when I unlearn them again soon.
1. Stata’s random number generator caps the seed you can set at 2,147,483,647.
Why did I learn this? We were doing a live random assignment for an impact evaluation I am starting in Colombia. We had programmed the code and tested it several times, and it worked fine. In our test code, we had set the seed for random number generation as the date “04112018”. Then, when my collaborator went to run this live, it was decided to also append the time of the drawing, so that the seed became “041120180304”. This generated an error and prevented the code from running. Luckily we could quickly fix it, and the live draw proceeded OK. But lesson learned: 2^31-1 is a large number, but it sometimes binds.
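The arithmetic behind that error can be sketched in Python (this is an illustrative sketch, not Stata code; the `stata_safe_seed` helper and the modulo fold are my own assumption for one way to keep a date-plus-time seed in range while staying reproducible):

```python
# Stata's `set seed` accepts integers up to 2^31 - 1 = 2,147,483,647.
STATA_MAX_SEED = 2**31 - 1

def stata_safe_seed(raw_seed: int) -> int:
    """Fold an arbitrary non-negative integer into Stata's valid seed range.

    Hypothetical helper: any deterministic mapping works, as long as you
    record the final seed actually passed to `set seed`.
    """
    return raw_seed % STATA_MAX_SEED + 1  # lands in 1..2^31-1

# The date-only seed was within the limit...
print(4112018 <= STATA_MAX_SEED)      # True
# ...but appending the time pushed it past the limit:
print(41120180304 <= STATA_MAX_SEED)  # False
# A folded seed would have stayed valid:
print(stata_safe_seed(41120180304))
```

If you do fold an oversized seed like this, write the folded value into your log or do-file so the draw can be replicated exactly.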
- NBER Summer institute development economics program and labor studies program.
- The map of “Manuscript-Earth” featuring “The pit of you saved those files, right? Right?”, “confused about the big picture woods”, “The island of misfit results” and other glorious landmarks (h/t Dave Evans).
- Do you say “no” enough to new projects? Anton Pottegard has a nice poster of 8 practical tools to assist in saying no – including JOMO (joy of missing out) – “once a project is turned down, set time aside to actively ponder about how happy you are not to be doing it” (h/t Scott Cunningham).
Teachers and Teaching
- How much scripting is too much scripting? Piper et al. review the evidence and find that “structured teachers' guides improve learning outcomes, but that overly scripted teachers' guides are somewhat less effective than simplified teachers' guides that give specific guidance to the teacher but are not written word for word for each lesson in the guide.”
- Teachers in Uganda tend to believe they are better than most other teachers in terms of ability and effort. This is especially true for low-effort teachers (Sabarwal, Kacker, and Habyarimana).
- Across 328 studies with nearly 4,000 effects, Direct Instruction performed really well: “All of the estimated effects were positive and all [with cognitive outcomes] were statistically significant” (Stockard et al.). What’s direct instruction? Think scripted lessons PLUS.
- A small study of 36 teachers in China showed that teachers “scored high on classroom organization, but lower on emotional support and instructional support.” Also, teachers who believe students should be at the center do better. (Coflan et al.)
- A large, unconditional increase in teacher salaries in Indonesia had no impact on student performance (de Ree et al.). This paper has been around (here’s my blog post on it), but it’s just now been published.
- Training teachers in a low-cost, highly scripted teaching method led to big gains in Papua New Guinea (Macdonald and Vu).
- Having subject-specific teachers in primary school may actually lead to less learning and lower student attendance. Evidence from the USA (Fryer). (My blog about it.)
This is a guest post by Bruce Wydick.
It isn’t hard to understand why Andrew Leigh would write a book on randomized controlled trials. A kind of modern renaissance man, Leigh currently serves as a member of the Australian House of Representatives. But in his prior life as an economist (Ph.D. from Harvard’s Kennedy School), Leigh published widely in the fields of public finance, labor, health, and political economy, even winning the Economic Society of Australia's Young Economist Award--a kind of John Bates Clark medal for Australians. His evolution from economist to politician must constantly evoke the following question: What is the best research approach for informing practical policy? In his new book, Leigh leaves little doubt about his answer. Randomistas: How Radical Researchers Changed Our World (forthcoming, Yale University Press) heralds the widespread incorporation of the randomized controlled trial (RCT) into the mainstream of social science.
- Book reviews
- On VoxDev: 1) how ethnic patronage determines rents and investments in Kenyan slums; 2) speeding up court pre-trials in Senegal without reducing decision quality; and 3) a video with John Sutton on how to attract FDI and generate jobs in Africa – he is a big fan of industrial parks.
- HBS Afterhours podcast has an interview with Raffaella Sadun discussing her work on the World Management Survey, including discussing measurement issues and why basic management is underrated relative to grand strategy.
- 80,000 hours podcast with Eva Vivalt – with a discussion of the Y-Combinator basic income experiment starting up – and whether you should release results in the interim – as well as on how to generalize from evidence, and on collecting priors about projects.
A couple of years ago, an influential paper in Science by Banerjee and coauthors looked at the impact of poverty graduation programs across 6 countries. At the time (and probably since) this was the largest effort to look at the same(ish) intervention in multiple contexts at once – arguably solving the replication problem and proving external validity in one fell swoop.
One of the standard defenses of an RCT proposal to a skeptic is to invoke budget and implementation capacity constraints and argue that since not everyone will get the desired treatment (at least initially), the fairest way would be to randomly allocate treatment among the target population. While this is true, it is also possible to take participants’ welfare into account, incorporating their preferences and expected responses to treatment into the design of an RCT that still satisfies the aims of the researcher (identifying unbiased treatment effects with sufficient precision). A recent paper by Yusuke Narita makes enough headway in this direction that development economists should take notice.