
Blog links February 27th: What counts as a nudge, being efficient, debiasing, and more…

By David McKenzie
  • How to be efficient – excellent advice from Dan Ariely. In particular, I liked “A calendar should be a record of anything that needs to get done — not merely of interruptions like meetings and calls” and “frequent email checks can temporarily lower your intelligence more than being stoned”.

Why is Difference-in-Difference Estimation Still so Popular in Experimental Analysis?

By Berk Ozler
David McKenzie pops up from under many of the empirical questions that arise in my research projects, and despite his prolific output, this still surprises me every time it happens. The last time was a teachable moment for me, so I thought I’d share it in a short post that fits nicely under our “Tools of the Trade” tag.
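The technique in the post's title, difference-in-differences, can be illustrated with a minimal sketch. The data, variable names, and effect size below are invented for illustration and are not from the post: simulate a two-period repeated cross-section and estimate the treatment-by-post interaction.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # observations per period

# Repeated cross-section: treatment status, pre/post indicator
df = pd.DataFrame({
    "treat": rng.integers(0, 2, 2 * n),
    "post": np.repeat([0, 1], n),
})

# Outcome with group effect, time effect, and a true treatment effect of 2.0
df["y"] = (1.0 + 0.5 * df["treat"] + 0.3 * df["post"]
           + 2.0 * df["treat"] * df["post"]
           + rng.normal(0, 1, 2 * n))

# Canonical two-group, two-period DiD regression:
# the coefficient on treat:post is the difference-in-differences estimate
did = smf.ols("y ~ treat * post", data=df).fit()
effect = did.params["treat:post"]
```

With this simulated data, `effect` should land close to the true effect of 2.0; in practice one would also cluster the standard errors at the unit of treatment assignment.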

Blog links February 20: understandability, the replication debate continues, thoughts on the “Africa problem in economics”, and more…

By David McKenzie
  • A third paper in 3ie’s internal replication series is now out, along with a response from the authors (Stefan Dercon and co-authors). The authors’ response is interesting for the issues with such replication exercises that it raises: “At the outset of this exercise, we were enthusiastic, but possibly naive participants. At its end, we find it hard to shake the feeling that an activity that began as one narrowly focused on pure replication morphed – once our original findings were confirmed (save for a very minor programming error that we willingly confess to) – into a 14 month effort to find an alternative method/structure of researching the problem that would yield different results.” (See also Berk’s posts on the previous replications.)
  • On the Let’s Talk Development blog, Emanuela Galasso reflects on the Chile Solidario program and how social programs can move from social protection to productive inclusion.
  • From Cornell’s Economics that really matters blog – conducting fieldwork in a conflict zone in Mexico.

Evaluating an Argentine regional tourism policy using synthetic controls: tan linda que enamora? (“so lovely it makes you fall in love”?)

By David McKenzie
In 2003, the Argentine province of Salta launched a new tourism development policy with the explicit objective of boosting regional development. This included improving tourism and transport infrastructure, restoring historical and cultural heritage areas, tax credits for the construction and remodeling of hotels, and a major promotion campaign at the national and international levels.
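The synthetic control method named in the title can be sketched as follows. This is an illustration with invented data, not the evaluation's actual implementation: choose nonnegative weights on donor regions, summing to one, that best reproduce the treated region's pre-treatment outcome path.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T_pre, J = 20, 8                          # pre-treatment periods, donor regions
Y0 = rng.normal(10, 2, (T_pre, J))        # donor-pool outcome paths

# Build a "treated" unit that is approximately a convex combination of donors
w_true = np.zeros(J)
w_true[:2] = [0.6, 0.4]
y1 = Y0 @ w_true + rng.normal(0, 0.1, T_pre)

# Pre-treatment fit error as a function of the weight vector
def loss(w):
    return np.sum((y1 - Y0 @ w) ** 2)

# Constrained least squares: w >= 0 and sum(w) == 1
res = minimize(loss, np.full(J, 1 / J),
               method="SLSQP",
               bounds=[(0, 1)] * J,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
w = res.x  # synthetic-control weights
```

The post-treatment gap between the treated unit's outcome and `Y0_post @ w` would then serve as the estimated policy effect, with inference typically done via placebo runs on the donor regions.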

Measuring Yields from Space

By Florence Kondylis

This post is co-authored with Marshall Burke.
One morning last August, a number of economists, engineers, Silicon Valley players, donors, and policymakers met on the UC-Berkeley campus to discuss frontier topics in measuring development outcomes. The idea behind the event was not that economists would ask experts to build the measurement tools they need, but rather that measurement scientists would tell economists what is happening at the frontier of measuring development-related outcomes. Rather than wait for pilot results, we decided to blog about some of these ideas and get input from Development Impact readers. In this series, we start with recent progress on measuring (“remote-sensing”) agricultural crop yields from space.

Do policy briefs change beliefs?

By David Evans
Impact evaluation evidence in developing countries is growing. In case we need evidence of that, here are two pieces: first, the cumulative number of IEs (by publication date) on improving learning, drawn from an array of systematic reviews published over the last couple of years, compiled by a colleague and me. Second, the cumulative number of IEs on reducing maternal and child mortality, from a recent systematic review (IEG 2013).

Weekly links Feb 6, 2015: research transparency, reliable 9% response rates, protests as a constraint to power, and more…

By David McKenzie
  • Ted Miguel is teaching a course on research transparency methods in the social sciences. Berkeley is posting the lectures on YouTube. Lecture 1 is now up.
  • Chris Blattman on a paper looking at how the tendency to publish null results varies by scientific field.
  • In Science, Jorge Guzman and Scott Stern on predicting entrepreneurial quality.
  • Ben Olken’s forthcoming JEP paper on pre-analysis plans in economics: this is a very nuanced and well-written piece, discussing both pros and cons – it notes a reaction I am increasingly persuaded by, which is that RCTs don’t really seem to have a lot of data-mining problems in the first place…and also that “most of these papers are too complicated to be fully pre-specified ex-ante”…main conclusion is benefits are highest from pre-specifying just a few key primary outcomes, and for specifying heterogeneity analysis and econometric specifications – less clear for specifying causal chain/mechanisms/secondary outcomes which can too easily get too complicated/conditional.

Tools of the Trade: a joint test of orthogonality when testing for balance

By David McKenzie
This is a very simple (and for once short) post, but since I have been asked this question quite a few times by people who are new to doing experiments, I figured it would be worth posting. It is also useful for non-experimental comparisons of a treatment and a control group.
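A minimal sketch of one common way to run such a joint test (an illustration with invented data and variable names, not code from the post): regress the treatment indicator on all baseline covariates at once and test their joint significance with an F-test, rather than running one balance t-test per covariate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400

# Invented baseline covariates measured before treatment assignment
df = pd.DataFrame({
    "age": rng.normal(35, 10, n),
    "educ": rng.normal(8, 3, n),
    "income": rng.normal(100, 25, n),
})

# Randomized assignment: covariates should not jointly predict treatment
df["treat"] = rng.integers(0, 2, n)

# Linear probability model of treatment on all baseline covariates;
# the regression F-statistic tests their joint orthogonality to treatment
balance = smf.ols("treat ~ age + educ + income", data=df).fit()
fstat, pval = balance.fvalue, balance.f_pvalue
```

A large p-value is consistent with successful randomization; a probit with a likelihood-ratio chi-square test is a common alternative to the linear probability model here.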