
Development Impact links

Weekly links February 15: time to change your research production function? Hurray for big retailers and big data, but watch out for dynamic responses, and more....

By David McKenzie
  • This is the best thing I’ve read all week, particularly because it contrasts so starkly what my usual workflow looks like with what I would like more of it to look like – Cal Newport (of Deep Work fame) asks in the Chronicle Review “Is email making professors stupid?”. He notes that in the modern environment professors/researchers act more like middle managers than monks, and suggests reforms to significantly restructure work culture to give professors more uninterrupted time for thinking and teaching, and require less time on email and administrative duties. He gives the example of Donald Knuth, who does not have email and has an executive assistant who “intercepts all incoming communication, makes sense of it, brings to Knuth only what he needs to see, and does so only at ideal times for him to see it. His assistant also directly handles the administrative chores — things like scheduling meetings and filing expenses — that might otherwise add up to a major time sink for Knuth. It’s hard to overstate the benefits of this setup. Knuth is free to think hard about the most important and specialized aspects of his work, for hours at a time, disconnected from the background pull of inboxes”. It does make me think back to this old post I wrote on O-ring and Knowledge Hierarchy production functions for impact evaluations though, and the continued ability of O-ring issues to stymie my projects.

Now that I’ve noted that, here are plenty of things to distract you from deeper work:

Weekly links February 8: some people still like knowledge, be passionate about it, you don’t always need to make policy recommendations, and more...

By David McKenzie
  • Rachel Glennerster on lessons from a year as DFID’s Chief Economist, including the importance of knowledge work: “As countries get richer, helping them spend their own money more effectively will become a more important route to reducing poverty than the UK directly paying for services”.
  • Seema Jayachandran and Ben Olken offer their thoughts on new exciting areas in development research and advice for young development researchers: “taking the time to actually immerse yourself in the environments that you are studying. That means going to the countries that you’re studying and making sure that you understand the environment firsthand” and “not over-strategize about what topics or methods have career returns at the expense of not working on what you are personally most excited about.”
  • A reminder that not all research has to make policy recommendations: there is a new World Bank report on the mobility of displaced Syrians, which looks at the voluntary return decisions of over 100,000 refugees to understand key factors influencing these decisions, combined with simulations of how different security scenarios might influence voluntary returns. But I particularly liked this in the Q&A about the report: “What policy recommendations do emerge from this report? This report does not aim to design policies. It focuses on informing such policies by providing the necessary data, analysis, and framework that demonstrate the tradeoffs between various policy choices.”
  • Fabrizio Zilibotti on how inequality shapes parenting styles – next time your kids complain you are being too strict, you can blame the economic environment.

Weekly links February 1: gtools for big data, scaling up CCTs, “the data have been mined, of course”, and more...

By David McKenzie
  • Working with big datasets in Stata? Then the package gtools might be for you – I love that they have to give the caveat “Due to a Stata bug, gtools cannot support more than 2^31-1 (2.1 billion) observations”. Meanwhile, the Stata blog has the second post on doing power calculations via simulations in Stata (a language-agnostic sketch of the idea follows this list).
  • More on industrial policy: A nice summary at VoxDev by Ernest Liu of his work on industrial policies in networks, and a reason to prioritize upstream sectors.
  • New SIEF note on using phone monitoring to help more money reach target beneficiaries: an example of how small effects can be meaningful when an intervention is cheap and scaled to many people – the treatment group was only 1.3% more likely to get their money, but this meant about $1 million more funding reached farmers when officials knew they would be phone monitored, and the monitoring only cost $36,000 – roughly $28 of additional funds delivered per dollar spent on monitoring.
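Since a couple of these links cover simulation-based power calculations, here is a minimal, language-agnostic sketch of the idea in Python rather than Stata – an illustration only, where the effect size, sample size, and two-sided z-test are placeholder assumptions rather than values from the Stata posts:

```python
import numpy as np
from scipy.stats import norm

def simulated_power(n_per_arm=100, effect=0.25, sd=1.0,
                    alpha=0.05, n_sims=2000, seed=42):
    """Estimate power by simulating many experiments and counting how
    often a two-sided z-test on the difference in means rejects."""
    rng = np.random.default_rng(seed)
    crit = norm.ppf(1 - alpha / 2)  # critical value, 1.96 for alpha = 0.05
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect, sd, n_per_arm)
        diff = treated.mean() - control.mean()
        se = np.sqrt(control.var(ddof=1) / n_per_arm
                     + treated.var(ddof=1) / n_per_arm)
        rejections += abs(diff / se) > crit
    return rejections / n_sims

print(simulated_power())  # roughly 0.42 for these placeholder values
```

The appeal of the simulation approach is that the data-generating process above can be swapped for whatever your design actually looks like (clustering, attrition, covariate adjustment) without deriving new formulas.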

Weekly links January 25: Doing SMS surveys, a Deaton classic re-released, upcoming conferences, coding tips, and more...

By David McKenzie
  • Recommendations for conducting SMS surveys from the Busara Center, who “sent a one-time mobile SMS survey to 3,489 Kenyans familiar with SMS surveys and to 6,279 not familiar. Each sample was randomized into one of 54 cross-cutting treatment combinations with variation across several dimensions: incentive amounts, pre-survey communication, survey lengths, and content variation”. Their recommendations: keep mobile surveys to 5 questions or provide higher incentives; randomize questions and response options; and know that males and under-30s will be most likely to respond. They also offer useful benchmarks on survey response rates (only 36% overall; 55% for those who have participated in past studies vs only 18% for a sample of newer respondents) and on how much incentives help (moving from 0 to 100 KES ($1) increases response by 8% in the new-respondent sample, but has no effect for past respondents).
  • Oxford’s CSAE has set up a new coder’s corner, where DPhil students will be posting weekly tips on coding that they have found useful.
  • VoxDev this week focuses on industrial policy – including Dani Rodrik starting the series off by giving an overview of where we currently stand in the literature: “the relevant question for industrial policy is not whether but how”
  • On Let’s Talk Development, Dave Evans notes that a 20-year re-issue of Angus Deaton’s famous “Analysis of Household Surveys” is now out (DOWNLOAD FOR FREE!!!!), with a new preface in which he reflects on trends over the last two decades – “I would be even more skeptical. As I taught the material over the years, it became clear that many of the uses of instrumental variables and natural experiments that had seemed so compelling at first lost a good deal of their luster with time.” – “Twenty years later, I now find myself very much more skeptical about instruments in almost any situation”.  I read this book cover-to-cover multiple times during my PhD and I highly recommend it.
  • Video of Chico Ferreira’s policy talk this week on Inequality as cholesterol: Attempting to quantify inequality of opportunity.
  • Conference calls for papers:
    • CEGA at Berkeley is holding a conference on lab experiments in developing countries, submissions due March 1.
    • Maryland is hosting the next BREAD conference. They invite submissions from interested researchers on any topic within the area of Development Economics. The deadline for submissions is February 18, 2019. Only full-length papers will be considered. Please send your paper to [email protected]
    • The World Bank’s ABCDE conference is on multilateralism/global public goods – submissions are due March 24.

Weekly links January 18: an example of the problem of ex-post power calcs, new tools for measuring behavior change, plan your surveys better, and more...

By David McKenzie
  • The Science of Behavior Change Repository offers measures of stress, personality, self-regulation, time preferences, etc. – with instruments for both children and adults, and information on how long the questions take to administer and where they have been validated.
  • Andrew Gelman on post-hoc power calculations (his arithmetic is reproduced in the short sketch after this list) – “my problem is that their recommended calculations will give wrong answers because they are based on extremely noisy estimates of effect size... Suppose you have 200 patients: 100 treated and 100 control, and post-operative survival is 94 for the treated group and 90 for the controls. Then the raw estimated treatment effect is 0.04 with standard error sqrt(0.94*0.06/100 + 0.90*0.10/100) = 0.04. The estimate is just one s.e. away from zero, hence not statistically significant. And the crudely estimated post-hoc power, using the normal distribution, is approximately 16% (the probability of observing an estimate at least 2 standard errors away from zero, conditional on the true parameter value being 1 standard error away from zero). But that’s a noisy, noisy estimate! Consider that effect sizes consistent with these data could be anywhere from -0.04 to +0.12 (roughly), hence absolute effect sizes could be roughly between 0 and 3 standard errors away from zero, corresponding to power being somewhere between 5% (if the true population effect size happened to be zero) and 97.5% (if the true effect size were three standard errors from zero).”
  • The World Bank’s data blog uses meta-data from hosting its survey solutions tool to ask how well people plan their surveys (and read the comments for good context in interpreting the data). Some key findings:
    • Surveys usually take longer than you think they will: 47% of users underestimated the amount of time they needed for the fieldwork – and after requesting more server time, many then had to re-request this extension
    • Spend more time piloting questionnaires before launching: 80% of users revise their surveys at least once when surveying has started, and “a surprisingly high proportion of novice users made 10 or more revisions of their questionnaires during the fieldwork”
    • Another factoid of interest: “An average nationally representative survey in developing countries costs about US$2M”
  • On the EDI Global blog, Nkolo, Mallet, and Terenzi draw on the experiences of EDI and the recent literature to discuss how to deal with surveys on sensitive topics.
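To make the arithmetic in Gelman’s post-hoc power example above easy to check, here is a short sketch reproducing his numbers – everything in it comes from the quote itself, and the small gap from his 16% is just his rounding of the estimate to exactly one standard error:

```python
from math import sqrt
from scipy.stats import norm

# Gelman's example: 100 treated (94 survive) vs 100 controls (90 survive)
p_t, p_c, n = 0.94, 0.90, 100
effect = p_t - p_c                                    # 0.04
se = sqrt(p_t * (1 - p_t) / n + p_c * (1 - p_c) / n)  # ~0.038, i.e. ~0.04

# Crude post-hoc power: the chance the estimate lands more than 2
# standard errors from zero, taking the true effect to equal the
# estimate (about 1 standard error away from zero).
z = effect / se                                       # ~1.05
power = norm.sf(2 - z) + norm.cdf(-2 - z)             # ~0.17 (Gelman: ~16%)
print(f"effect = {effect:.2f}, se = {se:.3f}, power = {power:.2f}")
```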

Weekly links January 11: it’s not the experiment, it’s the policy; using evidence; clustering re-visited; and more...

By David McKenzie
  • “Experiments are not unpopular, unpopular policies are unpopular” – Mislavsky et al. on whether people object to companies running experiments. “Additionally, participants found experiments with deception (e.g., one shipping speed was promised, another was actually delivered), unequal outcomes (e.g., some participants get $5 for attending the gym, others get $10), and lack of consent, to be acceptable, as long as all conditions were themselves acceptable.” Caveat to note: these results are based on asking MTurk subjects (and one sample of university workers) whether they thought it was ok for companies to do this.
  • Doing power calculations via simulations in Stata – the Stata blog provides an introduction on how to do this.
  • Marc Bellemare has a post on how to use Pearl’s front-door criterion for identifying causal effects – he references this more comprehensive post by Alex Chino, which provides some examples of its use in economics (the adjustment formula is sketched below).
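For reference, the adjustment formula the front-door posts build up to is the standard one: if a mediator M intercepts all of X’s effect on Y, the X-to-M link is unconfounded, and X blocks all back-door paths from M to Y, then

$$P\big(y \mid do(x)\big) = \sum_{m} P(m \mid x) \sum_{x'} P\big(y \mid x', m\big)\, P(x').$$

Intuitively, the first factor captures the effect of X on the mediator, and the inner sum captures the effect of the mediator on Y while holding X fixed to block the confounded path.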

A few catch-up links

By David McKenzie
Our links are on break until the new year, but here are a couple of catch-up links now that our job market series has finished:
  • BITSS had its annual conference (program and live video for the different talks posted online). Lots of discussion of the latest in transparency and open science, including a replication exercise with all AEJ Applied papers: “69 of 162 eligible replication attempts (42.6%) successfully replicated the article's analysis. A further 68 (42%) were at least partially successful. A total of 98 out of 303 (32.3%) relied on confidential or proprietary data, and were thus not reproducible by this project.” And slides by Evers and Moore that should cause you to question any analysis done using Poissons or Negative Binomials.

Weekly links November 16: Remembering TN, targeting vs universal transfers debates, farcical robustness checks, bad replication techniques, and more...

By David McKenzie

Weekly links November 9: a doppelganger U.K., conditional distributions of journal decision times, invisible infrastructure, and more...

By David McKenzie
  • The Wall Street Journal discusses the synthetic control method as a way to understand Brexit (gated): “There are small differences in the various studies, but they all use Prof. Abadie’s method as the basis for constructing a “doppelganger” U.K. from other similar advanced economies, such as the U.S., Canada, France and the Netherlands. They reach similar conclusions, suggesting the British economy at the start of 2018 was around 2% smaller than it would have been had the 2016 referendum gone the other way”
  • Market-level experimentation: in the Harvard Business Review, how Uber used synthetic control methods combined with experiments to decide whether to launch Express Pool (a minimal sketch of the weight-fitting step behind such methods follows below).
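For readers curious what sits underneath these “doppelganger” exercises, here is a minimal sketch of the weight-fitting step – the names and array shapes are placeholder assumptions, and real applications also match on predictor variables and add the inference procedures this sketch omits:

```python
import numpy as np
from scipy.optimize import minimize

def synth_weights(y_treated_pre, Y_donors_pre):
    """Choose non-negative donor weights summing to 1 so the weighted
    donor average tracks the treated unit over the pre-treatment period.
    y_treated_pre: (T,) outcomes; Y_donors_pre: (T, J) donor outcomes."""
    J = Y_donors_pre.shape[1]
    loss = lambda w: np.sum((y_treated_pre - Y_donors_pre @ w) ** 2)
    res = minimize(loss, x0=np.full(J, 1.0 / J),
                   bounds=[(0.0, 1.0)] * J,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

# The synthetic ("doppelganger") path after treatment is then
# Y_donors_post @ weights, and the estimated effect is the gap between
# the treated unit's actual path and this synthetic path.
```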

Weekly links November 2: harnessing shame, measuring markets, African safety nets and apprenticeships, rugby, and more...

By David McKenzie
  • “The average number of new social safety net programs launched each year in African countries since 2010 exceeded 10” – Kathleen Beegle on the Africa Can End Poverty blog discusses the rise of social safety nets in Africa.
  • The DeclareDesign team remind you to stratify your cluster-randomized experiments by cluster size (one simple implementation is sketched after this list).
  • With the job market coming up, a paper on the characteristics of “job market stars” – one factoid is that in development more than half the stars are female, compared to only 20% of all stars...another is that “not a single star student for six years running has taken a permanent job in industry”.
  • On VoxDev, Gordon Hanson and Amit Khandelwal discuss using night-light intensity to measure markets – with a comparison to what daytime satellite imagery reveals, and a note that combining the two provides the best results – “daytime imagery is particularly well-suited for defining the extent of market areas, and that nightlight imagery is useful for capturing the intensity of activity within these market boundaries”.
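One simple way to implement the DeclareDesign advice above is matched-pair assignment on cluster size: sort clusters by size, pair neighbours, and randomize within each pair. A minimal sketch, with made-up cluster names and sizes for illustration:

```python
import random

def assign_by_size_pairs(cluster_sizes, seed=7):
    """Pair clusters of similar size and randomly assign one cluster per
    pair to treatment, balancing the size distribution across arms."""
    rng = random.Random(seed)
    order = sorted(cluster_sizes, key=cluster_sizes.get)  # ids, smallest first
    assignment = {}
    for i in range(0, len(order) - 1, 2):
        treated = rng.choice([order[i], order[i + 1]])
        for c in (order[i], order[i + 1]):
            assignment[c] = "treatment" if c == treated else "control"
    if len(order) % 2:  # odd cluster left over: assign by coin flip
        assignment[order[-1]] = rng.choice(["treatment", "control"])
    return assignment

sizes = {"c1": 120, "c2": 45, "c3": 80, "c4": 200, "c5": 60}
print(assign_by_size_pairs(sizes))
```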
