
Statistical power

A Curated List of Our Postings on Technical Topics – Your One-Stop Shop for Methodology

By David McKenzie
This is a curated list of our technical postings, to serve as a one-stop shop for your technical reading. I’ve focused here on our posts on methodological issues in impact evaluation – we also have a whole lot of posts on how to conduct surveys and measure certain concepts that I’ll leave for another time. Updated August 20, 2015.
Random Assignment

From my mailbox: should I work with only a subsample of my control group if I have big take-up problems?

By David McKenzie
Over the past month I’ve received several versions of the same question, so I thought it might be useful to post about it.
Here’s one version:
I have a question about an experiment in which we had a very big problem getting the individuals in the treatment group to take up the treatment. As a result, our treatment group is now much smaller than our control group. For efficiency reasons, does it still make sense to survey the whole control group, or should we take a random draw so that we have equal numbers of treated and control?
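A minimal sketch of the efficiency intuition behind this question, using purely hypothetical group sizes: the standard error of a simple difference in means is proportional to sqrt(1/n_t + 1/n_c), so extra control observations still tighten the estimate even when the treatment group is small — subsampling the controls down to the treatment group's size only throws precision away.

```python
import math

def se_diff(n_t, n_c, sigma=1.0):
    """Standard error of a difference in means between two groups,
    assuming a common outcome standard deviation sigma."""
    return sigma * math.sqrt(1.0 / n_t + 1.0 / n_c)

# Hypothetical numbers: 100 treated after take-up problems, 400 controls available.
full = se_diff(100, 400)   # survey all controls: sqrt(0.0125) ~ 0.112
equal = se_diff(100, 100)  # subsample controls to match treatment: sqrt(0.02) ~ 0.141
```

With these illustrative numbers, keeping the full control group shrinks the standard error by about a fifth relative to an equal-sized subsample, even though the design is unbalanced.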
And another version

Yikes, not only is it hard for us to do experiments with firms, it can be really hard for firms to experiment on themselves

By David McKenzie
I came across a new working paper written by researchers at Google and Microsoft with the title “On the Near Impossibility of Measuring the Returns to Advertising”. They begin by noting the astounding statistic that annual US advertising revenue is $173 billion, or about $500 per American per year. That’s right: more than the annual GDP per capita of countries like Burundi, Madagascar, and Eritrea is spent just on advertising!

Does Business Training Work?

By Markus Goldstein

What do we really know about how to build business capacity? A nice new paper by David McKenzie and Chris Woodruff takes a look at the evidence on business training programs, one of the more common tools used to build up small and medium enterprises. They do some work to make the papers comparable, which helps in adding up the totality of the lessons. What’s more, as David and Chris go through the evidence, they come up with a lot of interesting…

When Randomization Goes Wrong...

By Berk Ozler

An important, and stressful, part of the job when conducting studies in the field is managing the many things that do not go according to plan. Markus, in his series of field notes, has written about these roller-coaster rides we call impact evaluations (see, for example, here and here).