
randomization

A Curated List of Our Postings on Technical Topics – Your One-Stop Shop for Methodology

David McKenzie
This is a curated list of our technical postings, to serve as a one-stop shop for your technical reading. I’ve focused here on our posts on methodological issues in impact evaluation – we also have a whole lot of posts on how to conduct surveys and measure certain concepts, curated here. In lieu of our regular links this week, this list has been updated through October 25, 2018.
General

Some theory on experimental design…with insight into those who run them

Markus Goldstein
A nice new paper by Abhijit Banerjee, Sylvain Chassang, and Erik Snowberg brings theory to how we choose to do evaluations – with some interesting insights into those of us who do them. It’s elegantly written, and full of interesting examples and thought experiments – well worth a read beyond the injustice I will do it here.

Be an Optimista, not a Randomista (when you have small samples)

Berk Ozler
We are often in a world where we are allowed to randomly assign a treatment to assess its efficacy, but the number of subjects available for the study is small. This could be because the treatment (and its study) is very expensive – often the case in medical experiments – or because the condition we’re trying to treat is rare, leaving us with too few subjects, or because the units we’re trying to treat are things like districts or hospitals, of which there are only so many in the country/region of interest.

Ethical Validity Response #2: Is random assignment really that unacceptable or uncommon?

David McKenzie
In his post this week on ethical validity in research, Martin Ravallion writes:
 “Scaled-up programs almost never use randomized assignment so the RCT has a different assignment mechanism, and this may be contested ethically even when the full program is fine.”

Lotteries aren’t so exotic

Taking Ethical Validity Seriously

Martin Ravallion
More thought has been given to the validity of the conclusions drawn from development impact evaluations than to the ethical validity of how the evaluations were done. This is not an issue for all evaluations. Sometimes an impact evaluation is built into an existing program such that nothing changes about how the program works. The evaluation takes as given the way the program assigns its benefits. So if the program is deemed ethically acceptable, this can be presumed to hold for the method of evaluation as well.
