
Doing Experiments with Socially Good but Privately Bad Treatments

By David McKenzie
Most experiments in development economics involve giving the treatment group something they want (e.g. cash, health care, schooling for their kids) or at least offering something they might want and can choose whether or not to take up (e.g. business training, financial education). Indeed, among the most common justifications for randomization is that there is not enough of the treatment for everyone who wants it, leading to oversubscription or randomized phase-in designs.

Do financial incentives undermine the motivation of public sector workers? Maybe, but where is the evidence from the field?

By Jed Friedman
These past weeks I’ve visited several southern African nations to assist ongoing evaluations of health sector pay-for-performance reforms. It’s been a whirlwind of government meetings, field trips, and periods of data crunching. We’ve made good progress and also discovered roadblocks – in other words, business as usual in this line of work. One qualitative data point has stayed with me throughout these weeks, the paraphrased words of one clinic worker: “I like this new program because it makes me feel that the people in charge of the system care about us.”

Defining Conditional Cash Transfer Programs: An Unconditional Mess

By Berk Ozler
Many policymakers are interested in the role of conditions in cash transfer programs. Do they improve outcomes of interest more than money alone? Are there trade-offs? Is there a role for conditions for political rather than technocratic reasons? It’s easy to extend the list of questions for a good while. However, before one can get to these questions, there is a much more basic question that needs to be answered (for any policymaker contemplating running one of these programs at any level): “What do you mean by a conditional (or unconditional) cash transfer program?”

Using spatial variation in program performance to identify causal impact

By Jed Friedman
I’ve read several research proposals in the past few months, as well as engaged in discussions, that touch on the same question: how to use the spatial variation in a program’s intensity to evaluate its causal impact. Since these proposals and conversations all mentioned the same fairly recent paper by Markus Frolich and Michael Lechner, I eagerly sat down to read it.

The Illusion of Information Campaigns: Just because people don’t know about your policy doesn’t mean that an information campaign is needed

By David McKenzie
How many points do you need to qualify to migrate to Australia? What is the cost of applying? How much money do you need to set up a bank account in the Cayman Islands? What is the procedure for getting money out of these accounts when you want to spend it?

May 3 Links: Finding your “thing” as a researcher, programs for female self-employment that work, and more…

By David McKenzie
  • From the indecision blog – as a young researcher, how do you find out what your “thing” is, that is, your research agenda? An interesting hypothesis that for many researchers, research preferences “reveal themselves”.
  • From the 3ie blog – does economics need a more systematic approach to replication to be considered a hard science? An interesting link within to an AER editor’s report on the journal’s replication policy.
  • New results published in the New England Journal of Medicine from the Oregon Health Experiment look at impacts of access to Medicaid on simple health measures like cholesterol and blood pressure (see our discussion of the original set of results here); for summaries of the new results, see either the Washington Post Wonkblog or NPR. One of the big measurement issues is of course that even with a sample of approximately 6,000 treated and 6,000 control, it is not clear there are enough cases over 2 years of the sort of health events that easier access to medical care can fix.
  • After Markus’s post this week showing how a package of grants and training helped women grow small businesses in Bangladesh, Chris Blattman has a post on new results from an evaluation he did in Uganda, which also finds positive impacts of training and grants on getting women to start businesses. We’ll wait for a working paper to render our thoughts on this – there are worrying issues (phased in randomization where the control group was guaranteed treatment at a known later date, potentially causing them to delay current business activities) and intriguing-sounding findings (general equilibrium effects on village economies) that pique my interest.
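The power concern raised in the Oregon Health Experiment bullet above can be made concrete with some back-of-the-envelope arithmetic. The sketch below is illustrative and not from any of the studies discussed: the baseline event rate, significance level, and power target are all assumptions chosen for the example, and the formula is the standard normal approximation for a two-sample difference in proportions.

```python
from math import sqrt

def mde_two_proportions(p0, n_per_arm, z_alpha=1.96, z_power=0.84):
    """Approximate minimum detectable effect (in percentage points, as a
    proportion) for comparing event rates across two equal-sized arms,
    using the normal approximation: MDE = (z_alpha + z_power) * SE."""
    se = sqrt(2 * p0 * (1 - p0) / n_per_arm)
    return (z_alpha + z_power) * se

# Assumption for illustration: 2% of adults experience the relevant
# health event over the 2-year study window.
baseline_rate = 0.02
mde = mde_two_proportions(baseline_rate, n_per_arm=6000)
print(f"Baseline rate {baseline_rate:.1%}, detectable change ~{mde:.2%}")
```

Under these assumed numbers, the detectable change is roughly 0.7 percentage points on a 2% base – about a one-third relative reduction, which is a very large effect to expect from easier access to care. That is one way to see why a 12,000-person sample can still be underpowered for rare health events.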
