
February 2016

Did you do your power calculations using standard deviations? Do them again...

Berk Ozler's picture

As the number of RCTs increases, it’s more common to see ex ante power calculations in study proposals. More often than not, you’ll see a statement like this: “The sample size is K clusters and n households per cluster. With this sample, the minimum detectable effect (MDE) is 0.3 standard deviations.” This, I think, is typically insufficient and can lead to wasteful spending on data collection or to misallocation of resources for a given budget.
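An MDE statement like the one quoted above usually comes out of the standard formula for a cluster-randomized design. Here is a minimal sketch of that calculation (the function and parameter names are my own illustration, not from the post; the formula follows the standard clustered-design power calculation):

```python
from math import sqrt
from statistics import NormalDist


def mde_sd_units(K, n, icc, alpha=0.05, power=0.80, p_treat=0.5):
    """Minimum detectable effect, in standard-deviation units, for a
    cluster-randomized trial with K clusters of n households each,
    a fraction p_treat of clusters treated, and intra-cluster
    correlation icc of the outcome."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    design_effect = 1 + (n - 1) * icc  # variance inflation from clustering
    return z * sqrt(design_effect / (p_treat * (1 - p_treat) * n * K))


# Example: 100 clusters of 20 households, intra-cluster correlation 0.10
mde = mde_sd_units(K=100, n=20, icc=0.10)
```

Note that the outcome’s standard deviation σ is implicitly set to 1 here: the same “0.3 SD” can correspond to very different effects in raw units depending on what σ actually is, which is one reason an SD-only statement can be insufficient on its own.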

Weekly links February 19: field experiments in accounting, the legal profession’s resistance to RCTs, when won’t scientists move?, and more…

David McKenzie's picture
  •  On the 3ie blog, Manny Jimenez notes the glaring omission of evaluations of private sector programs during last year’s “Year of Evaluation”
  • On the Conversation, ideas42 shares what insights from behavioral economics tell us about how to help people with their finances
  • Floyd and List on using field experiments in accounting and finance: with a recommendation to work in developing countries because the potential for randomization is higher, the firms aren’t as big, and the setting is less complex.
  • Greiner and Matthews on the (limited) use of RCTs in the legal profession in the U.S. “The intensity of the United States legal profession’s resistance to the RCT is such that, viewed individually, each law RCT appears to be a unicorn, a magical creation with no origin story that appears briefly in a larger setting and then fades away.” They find more than 50 RCTs, but note that “what looking we were able to do generated no evidence that the results of an RCT in the United States legal profession were actually used, in the sense that a program or policy changed because of the study’s results.” … “Even when researchers have been able to field RCTs in the United States legal profession, lawyers and judges sometimes undermined them. And the lawyers and judges who did so appeared to have a common motive: certainty as to the “right” answer.”

What’s in a title? Signaling external validity through paper titles in development economics

David Evans's picture

External validity is a recurring concern in impact evaluation: How applicable is what I learn in Benin or in Pakistan to some other country? There are a host of important technical issues around external validity, but at some level, policy makers and technocrats in Country A examine the evidence from Country B and think about how likely it is to apply in Country A. But how likely are they to consider the evidence from Country B in the first place?

Weekly links February 12: All-knowing gods and cheating, Kenya lab experiments, Dads on leave, and more…

David McKenzie's picture

Tools of the Trade: The Regression Kink Design

David McKenzie's picture

Regression Discontinuity designs have become a popular addition to the impact evaluation toolkit, and offer a visually appealing way of demonstrating the impact of a program around a cutoff. An extension of this approach which is growing in usage is the regression kink design (RKD). I’ve never estimated one of these, and am not an expert, but thought it might be useful to try to provide an introduction to this approach along with some links that people can then follow up on if they want to implement it.

Weekly links February 5: the future of the World Bank, education reforms, nutrition evidence, and more…

David McKenzie's picture
  • The latest Journal of Economic Perspectives has two papers on the role of the World Bank: Clemens and Kremer on its role in facilitating international agreements to reduce poverty; and Ravallion on the role as a knowledge bank. Clemens and Kremer have a nice list of policy areas where developing countries have dramatically changed policies following World Bank involvement and conclude that “While it is impossible to quantify the Bank’s policy influence in a precise way, our judgment is that Bank donors are getting a tremendous amount of policy influence with their limited funding. This influence comes both through deals that link Bank finance to policy reform and through the Bank’s soft power. For this reason, allocating more resources to the Bank would be desirable.”
  • The JEP also has a nice summary by Larry Katz of Roland Fryer’s work.
  • The wonkblog on how much evidence there is (or is not) behind nutrition guidelines, and how evidence interacts with public policy demands – and of the difficulties of using RCTs in this context but also the dangers of veering towards nutritional nihilism
  • Finally, if you wonder why your emails don’t get replied to, here is PhD comics

Is My NGO Having a Positive Impact?

David Evans's picture
Also available in: Português

This post is jointly authored by David Evans and Bruce Wydick.

A daunting question faces many non-governmental organizations (NGOs) involved in poverty work: after all the fundraising, logistics, direct work with the poor, and accounting are done, is my NGO having a positive impact? Indeed, as a recent Guardian article highlighted, “If the [NGO] sector wants to properly serve local populations, it needs to improve how it collects evidence.” Donors, too, are increasingly demanding evidence of impact from NGOs: no longer just the large funders, but small individual donors as well.
 

Responses to the policymaker complaint that “randomized experiments take too much time”

David McKenzie's picture

There has obviously been a large increase in the number of rigorous impact evaluations of World Bank projects over the past decade, including increasing use of randomized experiments. But one comment/complaint from a number of operational staff and government policymakers is still that “randomized experiments take too much time.” To avoid repeating myself so often in responding to this, I thought I’d provide some responses on this point here.