- Bad news for those who have relied on the NIH for human subjects training: it has stopped offering its free training and is currently designing a new course that will cost $40 and be available November 6. People on Twitter are recommending either the Global Health Training Centre free online course or the FHI Research Ethics Training Curriculum as possible alternatives if you need something now, or want something that is free.
- Andrew Gelman offers a nice reminder of why asking people to do post-hoc power calculations based on estimated effect sizes is a bad idea (a toy simulation of the problem follows this list).
- This week the DeclareDesign team ask whether blocking (stratifying) can actually increase your standard errors. The answer is yes, but it is hard for this to happen (a toy illustration also follows this list). See also my AEJ: Applied paper with Miriam Bruhn on this.
- On the Econ that Matters blog, Chris Barrett and John Hoddinott look at the state of development economics, as seen from over 600 NEUDC submissions: “the quality of the work is remarkably high”, and there is “suggestive evidence of an evolution in the field away from certain topics. There were virtually no ‘pure theory’ papers, although the best papers often contained a short theoretical or conceptual model to motivate the empirical work. There were few submissions in macroeconomics... surprisingly little on trade... Despite the profound long-term effects of climate change on developing countries, we received relatively few submissions on this topic” – also, most work is on Sub-Saharan Africa and South Asia, and very few papers use IV or matching.
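To see Gelman’s point concretely, here is a toy simulation of our own (the sample size, effect size, and significance threshold are all arbitrary): “post-hoc power” computed from an estimated effect is just a noisy transformation of that estimate, and conditional on statistical significance it is badly overestimated, because the significant estimates are the inflated ones.

```python
# A toy simulation (not from Gelman's post) of why post-hoc power
# computed from estimated effect sizes is misleading.
# All numbers here are arbitrary choices for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, true_effect, sd = 50, 0.2, 1.0   # per-arm sample size; a modest true effect
se = sd * np.sqrt(2 / n)            # standard error of a difference in means
z_crit = stats.norm.ppf(0.975)      # two-sided 5% critical value

# True power of this design against the true effect (upper tail only):
true_power = 1 - stats.norm.cdf(z_crit - true_effect / se)

# Simulate many studies; compute "post-hoc power" from each estimate.
est = rng.normal(true_effect, se, size=100_000)
posthoc_power = 1 - stats.norm.cdf(z_crit - est / se)
significant = np.abs(est) / se > z_crit

print(f"true power:                     {true_power:.2f}")
print(f"mean post-hoc power:            {posthoc_power.mean():.2f}")
print(f"mean post-hoc power | p < .05:  {posthoc_power[significant].mean():.2f}")
```

Run it and the post-hoc power among significant studies comes out several times the true power: exactly the exaggeration Gelman warns about.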
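And a toy illustration of the blocking point (ours, not the DeclareDesign team’s): for a fixed sample, blocked randomization has a larger true standard error than complete randomization when units within a block are more dissimilar than the sample as a whole. Since in practice you pair similar units, it is hard for blocking to hurt.

```python
# A toy simulation: blocking can increase the true standard error if the
# pairs are badly chosen. Potential outcomes are fixed; the true effect is 0.
import numpy as np

rng = np.random.default_rng(1)
y0 = np.arange(16, dtype=float)            # fixed potential outcomes
n = len(y0)
# Deliberately bad pairs: the most dissimilar units together (0 with 15, ...)
pairs = [(i, n - 1 - i) for i in range(n // 2)]

def estimate(d):
    """Difference in means between treated and control units."""
    return y0[d == 1].mean() - y0[d == 0].mean()

complete, blocked = [], []
for _ in range(20_000):
    d = np.zeros(n, dtype=int)             # complete randomization of half
    d[rng.choice(n, n // 2, replace=False)] = 1
    complete.append(estimate(d))
    d = np.zeros(n, dtype=int)             # one treated unit per pair
    for i, j in pairs:
        d[rng.choice([i, j])] = 1
    blocked.append(estimate(d))

print(f"true SE, complete randomization: {np.std(complete):.2f}")
print(f"true SE, blocked on bad pairs:   {np.std(blocked):.2f}")
```

With this adversarial pairing the blocked standard error is clearly larger; pair adjacent (similar) units instead and it drops well below the complete-randomization benchmark.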
Many researchers hope that their research will have some impact on policy. Research can impact policy directly: A policymaker uses the results of your study in making a policy decision. For direct policy impact, policymakers – or the people who advise them or the people who vote for them – have to know about your work. Research can also impact policy indirectly: Your research becomes part of a body of evidence which collectively affects future policy decisions. For indirect policy impact, other researchers have to know about your work. It is unlikely that your research will impact policy either directly or indirectly if no one knows about it.
Over the years, together with colleagues and co-authors, I’ve experimented with many ways of increasing the consumption of research, and I’ve seen many others tried. Here is a menu of ten options. The point isn’t to do all of these, but rather to select those that will help you reach the audience you most want to impact.
- Dissemination of results
Just about every article or report on education that we read these days – and some that we’ve written – bemoans the quality of education in low- and middle-income countries. The World Bank’s World Development Report 2018 devoted an entire, well-documented chapter to “the many faces of the learning crisis.” Recent reports on education in Latin America and in Africa make the same point.
But within low- and middle-income countries, not all education is created equal, and not all students face the same challenges. As Aaron Benavot highlights, “policies found to be effective in addressing the challenges facing ‘average’ or typical learners” will not necessarily be effective in addressing those “faced by learners from marginalized groups.”
Indeed, we know that within a given classroom, there can be massive variation in learning across students. As you can see in the figure below, from a group of students in New Delhi, India, a 9th grade class has some students reading at the 8th grade level and others at the 6th grade level; in math, some are performing at the 5th grade level and others at the 3rd grade level. So if an intervention increases average performance, are we helping those students who were already ahead or those who are furthest behind? (In this case, no one’s really ahead, since even the top performers are way behind grade level. But the students in the bottom quartile are doubly disadvantaged – behind in learning within a low-performing school system.)
Source: World Development Report 2018, using data from Muralidharan, Singh, and Ganimian (2017).
- Merit-based or needs-based scholarships? Over at Let’s Talk Development, Dave Evans discusses the power of labels for school scholarships, based on 9-year results in Cambodia by Barrera, de Barros, and Filmer.
- The DeclareDesign team has another blog post, this time illustrating how to design for spillovers, and how you have to be careful about which estimand you are after.
- Are regular paper abstracts too wordy? Dan Rogger tries a comic-strip abstract for his recent working paper on politics, bureaucracy, and infrastructure in Nigeria, and provides examples of longer-form graphic representations of research papers.
- On the Brookings Future Development blog, what a decade of panel data tells us about toilet use in India – “while almost all the new latrines in 2006 were still in use in 2010, many had been abandoned by 2016” – unfortunately the effects of community-led total sanitation were not lasting.
Four years ago, Markus looked at 20 impact evaluations and wrote a post concluding that most of them didn’t have much to say about reducing poverty (where poverty was defined as expenditure, income, and/or wealth). This summer Shanta Devarajan asked on Twitter for an update, so here it is.
My colleague Bilal Zia recently released a working paper (joint with Emmanuel Hakizimfura and Douglas Randall) that reports on an experiment conducted with 200 Savings and Credit Cooperative Associations (SACCOs) in Rwanda. The experiment tested two different approaches to decentralizing the delivery of financial education, and finds that improvements are greater when SACCOs get to choose which staff should be trained than when they are told to send the manager, a loan officer, and a board member.
One point in the paper that I thought might be of broader interest to our readers concerns what to do when you only have enough budget to survey a sample of a program’s beneficiaries, and you are concerned about getting enough compliers.
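The paper has its own approach; purely to illustrate the underlying arithmetic (all numbers below are hypothetical), note that if you expect a complier rate p and need m compliers, surveying n = m/p beneficiaries only delivers m compliers in expectation, so you fall short about half the time. You need a buffer against the binomial noise in realized take-up.

```python
# Back-of-the-envelope arithmetic (an illustration, not the paper's method):
# how many beneficiaries to survey so that, with high probability, the
# sample contains at least m compliers when each complies with probability p.
from scipy import stats

def required_sample(m: int, p: float, prob: float = 0.9) -> int:
    """Smallest n with P(Binomial(n, p) >= m) >= prob."""
    n = int(m / p)                          # naive start: m compliers expected
    while stats.binom.sf(m - 1, n, p) < prob:
        n += 1
    return n

m, p = 200, 0.4   # hypothetical: need 200 compliers, expect 40% take-up
print(f"naive n = m/p:       {int(m / p)}")
print(f"n for 90% assurance: {required_sample(m, p)}")
```

The gap between the two numbers is the buffer, and it grows as the expected take-up rate falls.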
This is a guest post by Craig McIntosh and Andrew Zeitlin.
We are grateful to have this chance to speak about our experiences with USAID's pilot of benchmarking its traditional development assistance against unconditional cash transfers. Along with the companion benchmarking study that is still in the field (that one comparing a youth workforce readiness program to cash), we have spent the past two and a half years working to design these head-to-head studies, and are glad to have a chance to reflect on the process. These are complex studies with many stakeholders and many collective agreements over communications, and our report to USAID, released yesterday, reflects that. Here, we convey our personal impressions as researchers involved in the studies.
- The NYTimes describes ongoing USAID attempts to benchmark some of its programs against cash: “The initiative has operated in stealth mode, in part because of fears that the idea of giving tax dollars to poor Africans might provoke objections from Congress or the White House. The project also poses a threat to hundreds of for-profit companies and nonprofit groups that secure U.S.A.I.D. contracts, often with scant evidence of impact”. Quartz Africa has a follow-up piece which has results from the first of these trials, comparing a standard USAID water and sanitation program in Rwanda to just giving cash. They find the WASH program doesn’t have much impact, cash-equivalent amounts do slightly more, and a larger cash grant has wide-ranging impacts. “You should be tipping the scale to doing more for fewer people... water, sanitation, and hygiene (WASH) programs could devote less of their resources to education and behavior change, and more to direct support”. Here’s another good summary at Vox.
- Following on Wednesday’s blog post about the DeclareDesign platform, the team has now set up a blog to give examples of its use. The first post looks at the question of whether a non-experimental study should control for a pre-treatment variable that is correlated with both the treatment and the outcome (a toy simulation of this question appears after this list).
- Dylan Matthews at Vox has a balanced look at the Blattman et al. 9-year follow-up that Berk blogged on earlier this week.
- The Upjohn Institute’s Employment Research newsletter has a summary of a conference on job search and vacancies, focusing on insights from firm-level data on job postings. There are interesting descriptives about how online job-posting markets work on several platforms, including one in China.
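As a toy version of that first post’s question (our simulation, not theirs): when a pre-treatment variable drives both who gets treated and the outcome, the unadjusted comparison is badly biased, and controlling for the variable removes the bias, at least in this simple linear setup.

```python
# A toy simulation: X affects both treatment take-up and the outcome,
# so omitting it biases the treatment estimate. Sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
n, sims, tau = 1_000, 2_000, 1.0            # true treatment effect is tau

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

est_raw, est_ctrl = [], []
for _ in range(sims):
    x = rng.normal(size=n)                           # pre-treatment variable
    d = (x + rng.normal(size=n) > 0).astype(float)   # treatment depends on x
    y = tau * d + x + rng.normal(size=n)             # outcome depends on both
    ones = np.ones(n)
    est_raw.append(ols(np.column_stack([ones, d]), y)[1])
    est_ctrl.append(ols(np.column_stack([ones, d, x]), y)[1])

print(f"true effect:                {tau}")
print(f"mean estimate, no control:  {np.mean(est_raw):.2f}")
print(f"mean estimate, controlling: {np.mean(est_ctrl):.2f}")
```

Their post works through the subtler cases; the appeal of the platform is that you can diagnose your own design this way rather than rely on rules of thumb.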
This is a guest post by Graeme Blair, Jasper Cooper, Alex Coppock, and Macartan Humphreys.
Empirical social scientists spend a lot of time trying to develop really good research designs and then trying to convince readers and reviewers that their designs really are good. We think the challenges of generating and communicating designs are made harder than they need to be because (a) there is not a common understanding of what constitutes a design and (b) there is a dearth of tools for analyzing the properties of a design.
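DeclareDesign itself is an R package, but the simulate-and-diagnose logic it rests on is easy to sketch. Here is a minimal Python analogue (the function names and numbers are our own choices, not theirs): write the model, the assignment procedure, and the estimator as functions, then simulate the whole design repeatedly to compute its properties, such as bias and power.

```python
# A minimal Python analogue of the declare-then-diagnose idea.
# (DeclareDesign is an R package; everything here is our own sketch.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def model(n=100, tau=0.3):
    """Potential outcomes with a constant true effect tau."""
    y0 = rng.normal(size=n)
    return y0, y0 + tau

def assign(n):
    """Complete random assignment of half the units to treatment."""
    d = np.zeros(n, dtype=int)
    d[rng.choice(n, n // 2, replace=False)] = 1
    return d

def estimator(y, d):
    """Difference in means, with a two-sample t-test p-value."""
    est = y[d == 1].mean() - y[d == 0].mean()
    pval = stats.ttest_ind(y[d == 1], y[d == 0]).pvalue
    return est, pval

def diagnose(sims=5_000, n=100, tau=0.3):
    """Simulate the design many times; report bias and power."""
    ests, pvals = [], []
    for _ in range(sims):
        y0, y1 = model(n, tau)
        d = assign(n)
        y = np.where(d == 1, y1, y0)
        est, pval = estimator(y, d)
        ests.append(est)
        pvals.append(pval)
    return {"bias": np.mean(ests) - tau,
            "power": np.mean(np.array(pvals) < 0.05)}

print(diagnose())
```

Once a design is written down this way, swapping one ingredient – say, the assignment scheme or the estimator – and re-diagnosing is trivial, which is precisely the kind of analysis that is hard to do with an informal, verbal description of a design.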