development impact links

Weekly links March 15: yes, research departments are needed; “after elections”, experiences with registered reports, and more...

  • Why the World Bank needs a research department: Penny Goldberg offers a strong rationale on Let’s Talk Development
  • On VoxDev, Battaglia, Gulesci and Madestam summarize their work on flexible credit contracts, which is one of my favorite recent papers – they worked with BRAC in Bangladesh to offer borrowers a 12-month loan, with borrowers having the option to delay up to two monthly repayments at any time during the loan cycle. This appears to be a win-win, with the borrowers being more likely to grow their firms, and the bank experiencing lower default and higher client retention. However, although the post doesn’t discuss it, the product seemed less successful in helping larger SMEs.
  • Political business cycles in Africa – Rachel Strohm notes a Quartz Africa story on a phenomenon that has held up a number of my impact evaluations – “Having contracts stalled and major projects abandoned is “very common”... The uncertainty is also magnified because newly-elected administrations could take months to form a cabinet and appoint heads of key agencies... as a bulk of voters travel to their ancestral homes to cast their ballot, businesses are forced to shutter or maintain skeletal operations... [this] has even made phrases like “after elections” a colloquial mainstay”.
  • The JDE interviews Eric Edmonds about his experience with the registered report process: “I thought I wrote really good pre-analysis plans and then I saw the template and realized, no, I write really bad pre-analysis plans too. I think just the act of providing that template to give some kind of standardization, is a great service to the profession... I think we need to be in a place where we have pre-analysis plans and we review them, and when we choose to deviate from them in our analysis, we're just able to be clear and to talk about why that is.” (h/t Ryan Edwards)

Weekly links March 1: the path from development economics to philanthropy, nitty-gritty of survey implementation, blame your manager for your low productivity, and more...


Weekly links Feb 22: alternatives to better willpower, CDFs for the win, the Brazilian solution to doubling Chinese consumption, and more...

  • A nice summary of the research on different strategies for reducing self-control failures by an all-star psychology/econ team of Duckworth, Milkman and Laibson in the open-access Psychological Science in the Public Interest journal. See in particular, Figure 2, which categorizes strategies by whether they need to be self-imposed vs can be imposed by others, and between approaches that “modify one’s situation and approaches that modify one’s cognitions, depending on whether they target the objective situation or, in contrast, one’s mental representation of the environment”. What is notable from reading this overview is how short-term many of the studies are, and how easy it is for best intentions to get derailed – e.g. a study that “tested the benefits of temptation bundling...this study showed substantial initial increases in self-controlled decisions from allowing people to enjoy tempting audio novels only when exercising ... In Week 1 of the intervention, participants in the treatment group exercised 55% more than those in the control group. These benefits lasted for several weeks but ended when the gym closed over Thanksgiving.”
  • Related to the above, Alice Evans interviews Gautam Rao about behavioral development economics, with discussions of where he sees the big puzzles that behavioral economics helps us answer – e.g. why people don’t invest in high-return projects, and why demand for preventative health is not higher – and a nice discussion of the complementarity between insiders and outsiders in knowing what questions to ask.

Weekly links February 15: time to change your research production function? Hurray for big retailers and big data, but watch out for dynamic responses, and more....

  • This is the best thing I’ve read all week, particularly because it contrasts so starkly what my usual workflow looks like with what I would like more of it to look like – Cal Newport (of Deep Work fame) asks in the Chronicle Review “is email making professors stupid?”. He notes that in the modern environment professors/researchers act more like middle managers than monks and suggests reforms to significantly restructure work culture to provide professors more uninterrupted time for thinking and teaching, and require less time on email and administrative duties. He gives the example of Donald Knuth, who does not have email and has an executive assistant who “intercepts all incoming communication, makes sense of it, brings to Knuth only what he needs to see, and does so only at ideal times for him to see it. His assistant also directly handles the administrative chores — things like scheduling meetings and filing expenses — that might otherwise add up to a major time sink for Knuth. It’s hard to overstate the benefits of this setup. Knuth is free to think hard about the most important and specialized aspects of his work, for hours at a time, disconnected from the background pull of inboxes”. It does make me think back to this old post I wrote on O-ring and Knowledge Hierarchy production functions for impact evaluations though, and the continued ability of O-ring issues to stymie my projects.

Now that I’ve noted that, here are plenty of things to distract you from working deeply:

Weekly links February 8: some people still like knowledge, be passionate about it, you don’t always need to make policy recommendations, and more...

  • Rachel Glennerster on lessons from a year as DFID’s Chief Economist, including the importance of knowledge work: “As countries get richer, helping them spend their own money more effectively will become a more important route to reducing poverty than the UK directly paying for services”
  • Seema Jayachandran and Ben Olken offer their thoughts on new exciting areas in development research and advice for young development researchers: “taking the time to actually immerse yourself in the environments that you are studying. That means going to the countries that you’re studying and making sure that you understand the environment firsthand” and “not over-strategize about what topics or methods have career returns at the expense of not working on what you are personally most excited about.”
  • A reminder that not all research has to make policy recommendations: There is a new World Bank report on the mobility of displaced Syrians, which looks at the voluntary return decisions of over 100,000 refugees to understand key factors influencing these decisions, combined with simulations of how different security scenarios might influence voluntary returns. But I particularly liked this in the Q&A about the report: “What policy recommendations do emerge from this report? This report does not aim to design policies. It focuses on informing such policies by providing the necessary data, analysis, and framework that demonstrate the tradeoffs between various policy choices.”
  • Fabrizio Zilibotti on how inequality shapes parenting styles – next time your kids complain you are being too strict, you can blame the economic environment.

Weekly links Feb 1: gtools for big data, scaling up CCTs, “the data have been mined, of course”, and more...

  • Working with big datasets in Stata? Then the package gtools might be for you – I love that they have to give the caveat “Due to a Stata bug, gtools cannot support more than 2^31-1 (2.1 billion) observations”. Meanwhile, the Stata blog has the second post on doing power calculations via simulations in Stata.
  • More on industrial policy: A nice summary at VoxDev by Ernest Liu of his work on industrial policies in networks, and a reason to prioritize upstream sectors.
  • New SIEF note on using phone monitoring to help more money reach target beneficiaries: an example where small effects are meaningful when cheap and scaled to many people – the treatment group were only 1.3% more likely to get their money, but this meant about $1 million more funding reached farmers when officials knew they would be phone monitored, and the monitoring only cost $36,000.
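The cost-effectiveness arithmetic in that last item is easy to make concrete. A minimal back-of-envelope sketch in Python, using only the dollar figures quoted from the SIEF note:

```python
# Back-of-envelope cost-effectiveness of the phone-monitoring result above.
# Both dollar figures come from the SIEF note; everything else is arithmetic.
extra_funding_delivered = 1_000_000  # USD: extra funding that reached farmers
monitoring_cost = 36_000             # USD: cost of the phone monitoring

benefit_cost_ratio = extra_funding_delivered / monitoring_cost
print(f"~${benefit_cost_ratio:.0f} of extra funding delivered per $1 of monitoring")
# -> roughly $28 delivered per $1 spent
```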

Weekly links January 25: Doing SMS surveys, a Deaton classic re-released, upcoming conferences, coding tips, and more...

  • Recommendations for conducting SMS surveys from the Busara Center, who “sent a one-time mobile SMS survey to 3,489 Kenyans familiar with SMS surveys and to 6,279 not familiar. Each sample was randomized into one of 54 cross-cutting treatment combinations with variation across several dimensions: incentive amounts, pre-survey communication, survey lengths, and content variation”. Their recommendations: keep mobile surveys to 5 questions or provide higher incentives; randomize questions and response options; and know that males and under-30s will be most likely to respond. They also provide useful benchmarks on survey response rates (only 36% overall: 55% for those who have participated in past studies, vs only 18% for a sample of newer respondents) and on how much incentives help (moving from 0 to 100 KES ($1) increases response by 8% in the new-respondent sample, but has no effect for past respondents). A minimal sketch of this kind of cross-cutting randomization appears after this list.
  • Oxford’s CSAE has set up a new coder’s corner, where DPhil students will be posting weekly tips on coding that they have found useful.
  • VoxDev this week focuses on industrial policy – including Dani Rodrik starting the series off by giving an overview of where we currently stand in the literature: “the relevant question for industrial policy is not whether but how”
  • On Let’s Talk Development, Dave Evans notes that a 20-year re-issue of Angus Deaton’s famous “Analysis of Household Surveys” is now out (DOWNLOAD FOR FREE!!!!), with a new preface in which he reflects on trends over the last two decades – “I would be even more skeptical. As I taught the material over the years, it became clear that many of the uses of instrumental variables and natural experiments that had seemed so compelling at first lost a good deal of their luster with time.” – “Twenty years later, I now find myself very much more skeptical about instruments in almost any situation”. I read this book cover-to-cover multiple times during my PhD and I highly recommend it.
  • Video of Chico Ferreira’s policy talk this week on Inequality as cholesterol: Attempting to quantify inequality of opportunity.
  • Conference calls for papers:
    • CEGA at Berkeley is holding a conference on lab experiments in developing countries, submissions due March 1.
    • Maryland is hosting the next BREAD conference. They invite submissions from interested researchers on any topic within the area of Development Economics. The deadline for submissions is February 18, 2019. Only full-length papers will be considered. Please send your paper to [email protected]
    • The World Bank’s ABCDE conference is on multilateralism/global public goods – submissions are due March 24.
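As flagged in the Busara item above, here is a minimal sketch of what assigning respondents to cross-cutting treatment combinations can look like. Only the four dimensions, the 54-combination count, and the sample sizes come from the post; the factor levels are illustrative assumptions, and the real study randomized its two samples separately rather than pooled:

```python
import itertools
import random

random.seed(42)  # reproducible assignment

# Hypothetical factor levels for the four dimensions described in the post.
incentives = [0, 50, 100]                      # KES
communication = ["none", "sms_notice", "call"]
lengths = [5, 10, 20]                          # number of questions
content = ["framing_a", "framing_b"]

combos = list(itertools.product(incentives, communication, lengths, content))
assert len(combos) == 54  # 3 * 3 * 3 * 2 cross-cutting treatment combinations

respondents = [f"resp_{i:04d}" for i in range(3489 + 6279)]

# Balanced assignment: repeat the 54 arms to cover the sample, then shuffle.
arms = (combos * (len(respondents) // len(combos) + 1))[: len(respondents)]
random.shuffle(arms)
assignment = dict(zip(respondents, arms))
print(assignment["resp_0000"])  # one (incentive, communication, length, content) tuple
```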

Weekly links January 18: an example of the problem of ex-post power calcs, new tools for measuring behavior change, plan your surveys better, and more...

  • The Science of Behavior Change Repository offers a repository of measures of stress, personality, self-regulation, time preferences, etc. – with instruments for both children and adults, and information on how long the questions take to administer and where they have been validated.
  • Andrew Gelman on post-hoc power calculations – “my problem is that their recommended calculations will give wrong answers because they are based on extremely noisy estimates of effect size... Suppose you have 200 patients: 100 treated and 100 control, and post-operative survival is 94 for the treated group and 90 for the controls. Then the raw estimated treatment effect is 0.04 with standard error sqrt(0.94*0.06/100 + 0.90*0.10/100) = 0.04. The estimate is just one s.e. away from zero, hence not statistically significant. And the crudely estimated post-hoc power, using the normal distribution, is approximately 16% (the probability of observing an estimate at least 2 standard errors away from zero, conditional on the true parameter value being 1 standard error away from zero). But that’s a noisy, noisy estimate! Consider that effect sizes consistent with these data could be anywhere from -0.04 to +0.12 (roughly), hence absolute effect sizes could be roughly between 0 and 3 standard errors away from zero, corresponding to power being somewhere between 5% (if the true population effect size happened to be zero) and 97.5% (if the true effect size were three standard errors from zero).” A short sketch reproducing this arithmetic follows this list.
  • The World Bank’s data blog uses meta-data from hosting its survey solutions tool to ask how well people plan their surveys (and read the comments for good context in interpreting the data). Some key findings:
    • Surveys usually take longer than you think they will: 47% of users underestimated the amount of time they needed for the fieldwork – and after requesting more server time, many then had to re-request extensions
    • Spend more time piloting questionnaires before launching: 80% of users revise their surveys at least once when surveying has started, and “a surprisingly high proportion of novice users made 10 or more revisions of their questionnaires during the fieldwork”
    • Another factoid of interest “An average nationally representative survey in developing countries costs about US$2M”
  • On the EDI Global blog, Nkolo, Mallet, and Terenzi draw on the experiences of EDI and the recent literature to discuss how to deal with surveys on sensitive topics.
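For the Gelman item above, here is a short sketch reproducing the quoted post-hoc power arithmetic in Python. All inputs come from his example, and the 2-standard-error rejection threshold follows the quote:

```python
from math import sqrt
from statistics import NormalDist

z = NormalDist()
p_treat, p_ctrl, n = 0.94, 0.90, 100  # survival rates and group size from the quote

effect = p_treat - p_ctrl                                   # 0.04
se = sqrt(p_treat*(1 - p_treat)/n + p_ctrl*(1 - p_ctrl)/n)  # ~0.04

def power(mu_in_se, crit=2.0):
    """P(|estimate| > crit standard errors) when the true effect is mu_in_se SEs."""
    return (1 - z.cdf(crit - mu_in_se)) + z.cdf(-crit - mu_in_se)

print(f"estimate {effect:.2f}, se {se:.3f}")
print(f"post-hoc power at the point estimate: {power(effect / se):.0%}")  # ~16-17%
print(f"power if the true effect were zero:   {power(0):.0%}")            # ~5%
```

The point of the quote survives the code: because the estimated effect is so noisy, the “power” computed from it is itself just a noisy transformation of the estimate.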

Weekly links January 11: it’s not the experiment, it’s the policy; using evidence; clustering re-visited; and more...

  • “Experiments are not unpopular, unpopular policies are unpopular” – Mislavsky et al. on whether people object to companies running experiments. “Additionally, participants found experiments with deception (e.g., one shipping speed was promised, another was actually delivered), unequal outcomes (e.g., some participants get $5 for attending the gym, others get $10), and lack of consent, to be acceptable, as long as all conditions were themselves acceptable.” – caveat to note: results are based on asking MTurk subjects (and one sample of university workers) whether they thought it was ok for companies to do this.
  • Doing power calculations via simulations in Stata – the Stata blog provides an introduction on how to do this; a minimal Python analogue follows this list.
  • Marc Bellemare has a post on how to use Pearl’s front-door criterion for identifying causal effects – he references this more comprehensive post by Alex Chinco which provides some examples of its use in economics.
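Since the Stata blog item above covers power by simulation in Stata, here is a minimal Python analogue. The trial parameters below (two arms, a normal outcome, a 0.3 standard deviation effect) are illustrative assumptions, not anything from the post:

```python
import numpy as np
from scipy import stats

def simulated_power(n_per_arm=100, effect=0.3, sd=1.0, alpha=0.05, reps=2000, seed=0):
    """Share of simulated trials in which a two-sample t-test rejects H0."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect, sd, n_per_arm)
        _, p = stats.ttest_ind(treated, control)
        if p < alpha:
            rejections += 1
    return rejections / reps

print(f"Estimated power: {simulated_power():.2f}")  # ~0.56 for these inputs
```

The same loop generalizes to clustered designs or binary outcomes by swapping out the data-generating process and the test.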

A few catch-up links

Our links are on break until the new year, but here are a couple of catch-up links now that our job market series has finished:
  • BITSS had its annual conference (program and live video for the different talks posted online). Lots of discussion of the latest in transparency and open science. Includes a replication exercise with all AEJ Applied papers: “69 of 162 eligible replication attempts (42.6%) successfully replicated the article's analysis. A further 68 (42%) were at least partially successful. A total of 98 out of 303 (32.3%) relied on confidential or proprietary data, and were thus not reproducible by this project.” And slides by Evers and Moore that should cause you to question any analysis done using Poissons or Negative Binomials.
