
David McKenzie's blog

It’s that Time of the Year: Submissions now open for our annual “Blog your job market paper” series


We are pleased to launch, for the sixth year, a call for PhD students on the job market to blog their job market paper on the Development Impact blog. We welcome blog posts on anything related to empirical development work, impact evaluation, or measurement. For examples, you can see posts from 2015, 2014, 2013, and 2012. We will follow a similar process to previous years, which is as follows:

We will accept submissions from now until midnight EST on Tuesday, November 22, with the goal of publishing a couple before Thanksgiving and then about 6-8 more in December, when people are deciding whom to interview. We will not accept any submissions after the deadline (no exceptions). As with last year, we will do some refereeing to decide which posts to include, on the basis of interest, how well written they are, and fit with the blog. Your chances of being accepted are likely to be somewhat higher if you submit early rather than waiting until the last minute before the deadline.

Lessons from some of my evaluation failures: Part 1 of ?


We’ve yet to receive much in the way of submissions to our learning-from-failure series, so I thought I’d share some of my own trials and tribulations, and what I’ve learnt along the way. Some of this comes back to how much you need to sweat the small stuff versus delegate and preserve your time for bigger-picture thinking (which I discussed in this post on whether IE is O-ring or knowledge-hierarchy production). But this presumes you have a choice over what you do yourself; when dealing with governments and multiple layers of bureaucracy, you often have less scope for micro-management in the first place. Here are a few examples, and I can share more in future posts.

Weekly links October 21: Deaton on doing research, flexible work schedules, 17,000 minimum wage changes, and more…

  • Three questions with Angus Deaton – why diversity in researchers is good and directed research can be bad “Everyone of us has a different upbringing. Many people in economics now come from many countries around the world, they have different political views and political backgrounds. There’s a whole different social culture, and so on. I think economics in the United States has changed immeasurably in the last 30 years and been enormously enriched by that diversity with people coming from all over the world. That will only work if people bring with them the stuff they had when they were children or the stuff they did in college, the passions they had early on. Either smash them to pieces in the face of the data and see your professors like me telling them to do something else or turn them into something really valuable. So, don’t lose your unique value contributions. Stick to what is really important to you and try to research that” (h/t Berk).
  • Chris Blattman on the hidden price of risky research, particularly for women.

Weekly links October 14: fake doctors and dubious health claims, real profits, refugees, reweighting, and more…

  • In Science this week: which refugees do Europeans want? A “conjoint experiment” with 180,000 Europeans finds they prefer refugees who are high-skilled, young, fluent in the local language, persecuted, and non-Muslim (a 5-page paper with a 121-page appendix!). The experiment showed respondents pairs of refugee profiles with randomly assigned characteristics and asked whether they supported admitting each refugee and, if they could admit only one of the pair, which one.
  • BBC News covers the recent Science paper by Jishnu Das and co-authors on training ‘fake doctors’ in India (or, for more study details, see the MIT press release, which has a great photo-bomb).

Book Review: Failing in the Field – Karlan and Appel on what we can learn from things going wrong


Dean Karlan and Jacob Appel have a new book out called Failing in the Field: What we can learn when field research goes wrong. It is intended to highlight research failures and what we can learn from them, sharing stories that might otherwise be told only over a drink at the end of a conference, if at all. It draws on a number of Dean’s own studies, as well as those of several other researchers who have shared their stories and lessons. The book is a good short read (I finished it in an hour), and definitely worth the time for anyone involved in collecting field data or running an experiment.

Weekly links September 30: re-analysis and respective criticism, working with NGOs, off to the big city, and more…

  • Solomon Hsiang and Nitin Sekar respond to the guest post by Quy-Toan Do and co-authors which had re-analyzed their data to question whether a one-time legal sale of ivory had increased elephant poaching. They state “Their claims are based on a large number of statistical, coding, and inferential errors.  When we correct their analysis, we find that our original results hold for sites that report a large number of total carcasses; and the possibility that our findings are artifacts of the data-generating process that DLM propose is extremely rare under any plausible set of assumptions”.
    • We screwed up by hosting this guest post without checking that Do and co-authors had shared it with the original co-authors and had given them a chance to respond.
    • We do believe that blogs have an important role to play in discussing research (see also Andrew Gelman on this), but think Uri Simonsohn’s piece this week on how to civilly argue with someone else’s analysis has good-practice ideas for both social media and refereeing – in particular, sharing a re-analysis with the original authors before it is made public. We will try to adhere to this better in the future.
    • We are waiting to see whether Do and co-authors have any further response, and plan on posting only one more summary on this after making sure both sides have iterated. We plan to avoid elephant wars, since the worm wars were enough.
  • In somewhat related news, Dana Carney shows how to gracefully accept and respond to criticism over your earlier work.

Weekly links September 23: yay for airlines x2, dig out those old audit studies, how to study better, and more…

  • The second edition of the book Impact Evaluation in Practice by Paul Gertler, Sebastian Martinez, Patrick Premand, Laura Rawlings and Christel Vermeersch is now available. For free online! “The updated version covers the newest techniques for evaluating programs and includes state-of-the-art implementation advice, as well as an expanded set of examples and case studies that draw on recent development challenges. It also includes new material on research ethics and partnerships to conduct impact evaluation.”
  • Interesting Priceonomics piece on R.A. Fisher, and how he fought against the idea that smoking causes cancer
  • Oxfam blog post on power calculations for propensity score matching
  • The importance of airlines for research and growth:

Weekly links September 16: infrastructure myths, surveying rare populations x 2, being a development mum, and more…


Power Calculations for Regression Discontinuity Evaluations: Part 3

This is the third, and final, post in my series on doing power calculations for regression discontinuity (see part 1 and part 2).
Scenario 3 (SCORE DATA AVAILABLE, AT LEAST PRELIMINARY OUTCOME DATA AVAILABLE; OR SIMULATED DATA USED): The context of data already being available seems less usual to me in the planning stages of an impact evaluation, but could be possible in some settings (e.g. you have the score data and administrative data on a few outcomes, and are then deciding whether to collect survey data on other outcomes). More generally, you will be in this situation once you have collected all your data. Moreover, the methods discussed here can also be used with simulated data in cases where you don’t have data.

There is a new Stata package, rdpower, written by Matias Cattaneo and co-authors that can be really helpful in this scenario (thanks also to him for answering several questions I had on its use). It calculates power and sample sizes, assuming you are then going to use the rdrobust command to analyze the data. There are two related commands here (a rough usage sketch follows the list below):
  • rdpower: this calculates the power, given your data and sample size, for a range of different effect sizes
  • rdsampsi: this calculates the sample size you need to achieve a given power, given your data and that you will be analyzing it with rdrobust.
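
To make this concrete, here is a minimal sketch of how the two commands might be called. The variable names (y for a preliminary outcome, score for the assignment score), the cutoff of 0, and the hypothesized effect size of 0.2 are illustrative assumptions rather than values from any particular study, and the exact option names should be double-checked against the package's help files.

* Illustrative sketch only: variable names, cutoff, and effect size are assumptions.
* Install the package first (e.g. ssc install rdpower, or from the authors' website),
* and see -help rdpower- and -help rdsampsi- for the full option list.

* Power of the rdrobust estimator against a hypothesized effect of 0.2,
* given the current sample and the observed distribution of the score:
rdpower y score, c(0) tau(0.2)

* Sample size needed on each side of the cutoff to reach a target power
* against that same effect size:
rdsampsi y score, c(0) tau(0.2)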

Weekly links September 9: no to cash? Machine learning for economists, climate economics, stupid regulations unchanged and more…

