
Seeking nimble plumbers

Alaka Holla
Sometimes (maybe too many times), I come across an evaluation with middling or null results accompanied by a disclaimer that implementation didn’t go as planned and that results should be interpreted in that light. What can we learn from these evaluations? Would results have been better had implementation gone well? Or even if implementation had gone just fine, was the intervention the right solution for the problem? It’s hard to say, if we think of program success as a product of both implementation and a program that is right for the problem.

GiveDirectly Three-Year Impacts, Explained

Berk Ozler

My post earlier this week on dissipating effects of cash transfers on adults in beneficiary households has caused not only a fair amount of disturbance in the development community, but also a decent amount of confusion about the three-year impacts of GiveDirectly’s cash transfers, from a working paper by Haushofer and Shapiro (2018) – HS (18) from hereon. At least some, including GiveDirectly itself and some academics, seem to think that one can reasonably interpret the findings in HS (18) to imply that the short-term effects of GD, also by Haushofer and Shapiro (2016) – HS (16) from hereon – were sustained three years post treatment. Below, I try to clear up the confusion regarding the evidence and explain why I vigorously disagree with that interpretation.

Weekly Links March 30: Academia vs policy, conflict on risk, child nutrition, and how to get the most out of an impact evaluation

David Evans
  • Sylvain Chabé-Ferret from the Toulouse School of Economics takes stock in The Empirical Revolution in Economics: Taking Stock and Looking Ahead. He proposes 8 knowledge achievements of the empirical revolution in economics, 4 methodological advances, 3 major challenges, and 3 proposed solutions. 
  • Sue Dynarski from the University of Michigan has a talk on "how to communicate with policymakers": "All communication is basically the same. Good communication is concise and it's to the point and it's concrete. And that's true for research writing... It's true for teaching... It's true if you're speaking to the public or to the media... It's just that people differ in how much they really have to listen to you." Policymakers don't have to listen to you. "Speaking in plain English is super important." She recommends Strunk and White's The Elements of Style. I do, too.

  • Matthew Jukes from RTI proposes a "context-mechanisms-outcomes approach" to doing and reporting impact evaluations in his piece "Learning more from impact evaluations: Contexts, mechanisms and theories of literacy instruction interventions," in order to get the most out of evaluations, and he gives examples from a recent literacy intervention in Kenya.

  • Over at the IFPRI blog, Tracy Brown reports on impact evaluations of "food-assisted maternal and child health and nutrition" programs in Guatemala and Burundi. In Burundi, "the largest impact on stunting was experienced by those who received food assistance throughout the entire period of the first 1,000 days, from conception to a child’s second birthday." (Blog 1, blog 2, with blog 3 coming soon here.) 

  • Alice Evans's 4 Questions podcast has featured several Development Impact-relevant stories in the last couple of weeks, including Pam Jakiela and Owen Ozier discussing "the impact of conflict on people's preferences" for risk, Michael Woolcock on the value of mixed methods in understanding "what works," and me talking about an impact evaluation to improve health care management in Nigeria as well as about the World Development Report on Education.  

  • At Oxford's conference at the Center for the Study of African Economies, DFID Chief Economist Rachel Glennerster, who has worked extensively in both policy and academia, discussed the differences as she sees them, summarized below. You can watch her full talk here.

How to Publish Statistically Insignificant Results in Economics

David Evans

Sometimes, finding nothing at all can unlock the secrets of the universe. Consider this story from astronomy, recounted by Lily Zhao: “In 1823, Heinrich Wilhelm Olbers gazed up and wondered not about the stars, but about the darkness between them, asking why the sky is dark at night. If we assume a universe that is infinite, uniform and unchanging, then our line of sight should land on a star no matter where we look. For instance, imagine you are in a forest that stretches around you with no end. Then, in every direction you turn, you will eventually see a tree. Like trees in a never-ending forest, we should similarly be able to see stars in every direction, lighting up the night sky as bright as if it were day. The fact that we don’t indicates that the universe either is not infinite, is not uniform, or is somehow changing.”

Weekly links March 23: recall revisited, Imbens critiques the Cartwright-Deaton RCT critiques, a new source for learning causal inference, and more...

David McKenzie
  • The bias in recall data revisited: On the IFPRI blog, Susan Godlonton and co-authors discuss their work on “mental anchoring,” the tendency to rely too heavily on one piece of information (the "anchor") when making a decision. They use panel data in which they ask people both about current outcomes and to recall outcomes from a year ago. They find that people use their current outcomes as an anchor when trying to recall what happened a year earlier: “a $10 increase reported in the 2013 concurrent report for monthly income was associated with a $7.50 increase in the recalled monthly income for 2012.”
  • Scott Cunningham posts his “mixtape” on teaching causal inference, a textbook that may be of particular interest to many of our readers because of its applied focus, use of Stata examples and Stata datasets, and also coverage of some topics not found in many of the alternatives (e.g. directed acyclical graphs, synthetic controls).
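The anchoring coefficient reported in the recall item above can be illustrated with a stylized model. This is a hypothetical sketch, not the authors' estimating equation: it simply treats recalled income as a weighted blend of the anchor (current income) and the true past value, with the weight implied by the $7.50-per-$10 figure.

```python
# Slope implied by the quoted finding: a $10 increase in reported
# current income moves recalled past income by $7.50.
slope = 7.50 / 10.0  # 0.75

def recalled_income(current_income, true_past_income, anchor_weight=slope):
    """Stylized anchoring model (hypothetical): recall is a weighted
    average of the anchor (current income) and the true past value."""
    return anchor_weight * current_income + (1 - anchor_weight) * true_past_income

# Someone whose monthly income rose from $100 to $200 would, under this
# sketch, recall a past income of $175 rather than the true $100.
print(recalled_income(200, 100))  # 175.0
```

Under this toy model, the larger the change in current income, the further recalled income drifts from the truth, which is the bias the authors warn about when surveys rely on one-year recall.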

The latest research in economics on Africa: The CSAE round-up

Markus Goldstein

This post was coauthored with Niklas Buehren, Joao Montalvao, Sreelakshmi Papineni, and Fei Yuan. The team couldn’t attend all 106 sessions, so coverage is limited. If you saw a paper that you think people should know about, please submit a comment.

You can skim the full summary, or you can skip to one of the topics: Agriculture, conflict, credit, savings, risk and insurance, education, electricity access, firms, health and nutrition, households and networks, institutions, labor, political economy, poverty and inequality, and using evidence to inform policy.

The full program and links to most of the papers are available here.

Gender Differences in What Development Economists Study

Seema Jayachandran
Co-authored with Jamie Daubenspeck, a PhD student at Northwestern University.

One of the arguments in favor of more gender diversity in the economics profession is that men and women bring distinct perspectives to research and are interested in answering different research questions. In this post, we focus on development economics and examine how the research topics studied by men and women differ.

The State of Development Journals 2018: Quality, Acceptance Rates, Review Times, and Representation

David McKenzie
Last year I published an inaugural “state of development journals” post, in which I put together information about different development journals that is not otherwise publicly available. Since readers and many of the editors seemed interested, I thought I would do it again this year to see how much has changed, as well as to investigate a few topics not covered last year. Many thanks to the editors and editorial staff at the different journals for the information they shared.
  1. Is this a good-quality, high-visibility journal to publish my work?

Since these were collected last year as well, I provide the impact factor of the journals. The standard impact factor is the mean number of citations received in the past year by papers the journal published in the previous two years; the 5-year impact factor is the same calculation over papers published in the previous five years. This is complemented by RePEc’s journal rankings, which take into account article downloads and abstract views in addition to citations. The impact factors and RePEc ranks are reasonably stable over the two years, with the World Bank Research Observer seeing the biggest jump in impact factor. It publishes the smallest number of articles, so its mean is more likely to be influenced by one or two papers.
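The impact-factor definition above reduces to a simple ratio. As a rough sketch with made-up citation counts (the journal and numbers here are hypothetical, not from the post's data):

```python
def impact_factor(citations_this_year, articles_published):
    """Standard 2-year impact factor: citations received this year to
    articles the journal published in the previous two years, divided
    by the number of articles published in those two years."""
    return citations_this_year / articles_published

# Hypothetical journal: 40 citations in 2017 to the 25 articles
# it published in 2015-2016.
print(impact_factor(40, 25))  # 1.6
```

The same function gives the 5-year impact factor if you pass citations to, and counts of, articles from the previous five years instead. The small-denominator caveat in the text is visible here: with only 25 articles, a single highly cited paper can move the mean substantially.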