Weekly links November 6: 5 years of nudging, peer effects, not enough news in New Zealand again, and more…

David McKenzie
  • The Behavioural Insights Team (aka the Nudge Unit) turns 5 – a Psych Report interview discusses its achievements and where the team plans to go next: “two things I would point to that, personally, I am most proud of. The first is that I think we can say we have changed the way in which policy is made in Whitehall. People think about drawing on ideas from the behavioral sciences in a way that five years ago almost nobody did. Secondly, people now think about using randomized controlled trials as one of the policy tools that can be used to find out whether or not something works. Again, that was just not considered to be part of a policymaker’s toolbox five years ago. So rather than pointing to the successes of the interventions, I think I’m most proud of the fact that we’ve started to change the mindsets of policymakers in the UK government.”

“Fixed in our ways” – Our stubborn personalities can pose a challenge for subjective welfare data

Jed Friedman

An increasing number of economists analyze subjective welfare data – which records a subject’s “happiness” or “life satisfaction” – as a complement to more traditional money-based measures of wellbeing such as income or consumption. Both the promise and the pitfalls of subjective welfare (SW) measures have been widely discussed, including on this blog here and here and here. One major challenge is the concern that fixed personal characteristics (such as someone’s “natural optimism”) determine SW responses to a far larger degree than time-varying economic factors. If that is the case, then the usefulness of SW data for informing economic policy is unclear. Now two recent papers teach us more about the interpretive difficulties of SW in the presence of fixed individual characteristics.

Finally a matching grant evaluation that worked…at least until a war intervened

David McKenzie
Several years ago I was among a group of researchers at the World Bank who all tried to conduct randomized experiments of matching grant projects (where the government funds part of the cost of firms innovating or upgrading technology, and the firm funds the other part). Strikingly, we tried to implement an RCT on seven projects and failed each time, mostly because of an insufficient number of applicants.

Weekly links October 30: how to decide on whether you have a zero effect, the impact of becoming a (Swiss) citizen, and more…

David McKenzie

It’s that time of the year (no, not Halloween): Submissions open for our annual "Blog Your Job Market Paper" series

Berk Ozler

We are pleased to launch, for the fifth year, a call for PhD students and others on the job market to blog their job market paper on Development Impact. We welcome blog posts on anything related to empirical development work, impact evaluation, or measurement. For examples, you can see posts from 2014, 2013, and 2012. We will follow a slightly altered process from previous years, with the main difference being a hard deadline for submissions rather than rolling submissions:

We will start accepting submissions immediately, until midnight on Monday, November 23, with the goal of publishing a couple before Thanksgiving and then about 6-8 more in December, when people are deciding who to interview. We will not accept any submissions after the deadline. We will also do some more refereeing this year, which might imply a slightly lower success rate than in previous years (but still better than 50%). Below are the rules that you must follow, along with some guidance and tips:

Weekly links October 23: popularizing research, partial identification, celebrating ideas-led growth, and more…

David McKenzie

Notes from the field: October edition

Markus Goldstein
In our continued series on experiences in implementing impact evaluations in the field, here are a couple of observations from my recent fieldwork on some enterprise-related impact evaluations I am working on:
  • The case of the fired-up implementers. One of the evaluations we are working on is comparing two different types of business training, with both being delivered by the same service providers. Apparently the training of the trainers worked too well; in at least one location the trainers were so entrepreneurially energized by the training that they developed their own hybrid model combining the two (yes, there already was a third arm where folks get both). This reminded me of the importance of always knowing (at multiple points in time), as best you can, what is actually being implemented.