Often there are many steps or stages between the starting point of an intervention and its ultimate goal, and at each step people can drop out. The result can be extremely low power to measure impacts on the end outcome, even when we can detect impacts on the intermediate steps. This post illustrates the point, makes the case for measuring intermediate outcomes, and concludes with suggestions for partially overcoming this problem.
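To see how quickly power erodes down the funnel, consider a back-of-the-envelope sample size calculation for a two-sided test of a difference in proportions. The numbers below (a 50% vs. 60% intermediate outcome, with only 20% of those passing through to the final outcome) are purely illustrative assumptions, not figures from the post:

```python
import math

def n_per_arm(p_c, p_t, alpha=0.05, power=0.80):
    """Sample size per arm for a two-sided test of two proportions.

    Uses the standard normal-approximation formula with
    z_{alpha/2} = 1.96 and z_{beta} = 0.8416 (alpha=0.05, power=0.80).
    """
    z_a, z_b = 1.96, 0.8416
    var = p_c * (1 - p_c) + p_t * (1 - p_t)
    return math.ceil((z_a + z_b) ** 2 * var / (p_c - p_t) ** 2)

# Hypothetical intermediate outcome: control 50%, treatment 60%
n_intermediate = n_per_arm(0.50, 0.60)   # ~385 per arm

# Suppose only 20% of those who reach the intermediate step reach
# the final outcome, so the same treatment effect shrinks to
# 10% vs. 12% at the end outcome
n_final = n_per_arm(0.10, 0.12)          # ~3,839 per arm
```

The absolute effect shrinks as it passes through each stage, and because required sample size scales with the inverse square of the effect, the experiment powered for the intermediate outcome needs roughly ten times the sample to detect the same proportional effect at the end outcome.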
- A webcast of the AEA panel on “publishing in economics journals: the curse of the top 5” (h/t @DurRobert) – Heckman, Akerlof, Deaton, Fudenberg and Hansen discuss. Some interesting discussion and comments – Deaton notes he didn’t have any papers rejected until he was famous; Heckman presented a lot of data, including one table whose first column shows which journals account for most dissemination of the ideas of the top development economists – with WBER number 1:
Here’s one version:
I have a question about an experiment in which we had a very big problem getting the individuals in the treatment group to take up the treatment. As a result, we now have a treatment group much smaller than the control group. For efficiency reasons, does it still make sense to survey the entire control group, or should we take a random draw so that we have equal numbers of treated and control?
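One way to think about this question: the variance of a difference in means is proportional to (1/n_t + 1/n_c), so extra control observations always reduce the standard error, just with diminishing returns once the control arm is much larger than the treatment arm. A minimal sketch with made-up sample sizes (200 treated, either 200 or 1,000 controls):

```python
import math

def se_diff(n_t, n_c, sigma=1.0):
    """Standard error of a difference in means, assuming equal
    outcome variance sigma^2 in both arms."""
    return sigma * math.sqrt(1 / n_t + 1 / n_c)

se_equal = se_diff(200, 200)    # equal arms: ~0.100
se_full  = se_diff(200, 1000)   # all controls surveyed: ~0.077

# The floor is set by the small treatment arm:
se_floor = math.sqrt(1 / 200)   # ~0.071, even with infinite controls
```

So surveying the full control group still buys precision (here, roughly a 23% smaller standard error than an equal split), but the small treatment arm sets a hard floor. Whether the extra surveys are worth it then becomes a question of survey cost per unit of precision gained.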
And another version:
What do we really know about how to build business capacity? A nice new paper by David McKenzie and Chris Woodruff takes a look at the evidence on business training programs – one of the more common tools used to build up small and medium enterprises. They do some work to make the papers comparable, which helps us add up the totality of the lessons. What’s more, as David and Chris go through the evidence, they come up with a lot of interesting…