
Weekly links May 26: the Chetty production function, collect priors before you work, small samples bring trouble, and more…

By David McKenzie

A Framework for Taking Evidence from One Location to Another

By David Evans

“Just because it worked in Brazil doesn’t mean it will work in Burundi.” That’s true. And hopefully obvious. But some version of this critique continues to be leveled at researchers who carry out impact evaluations around the world. Institutions vary. Levels of education vary. Cultures vary. So no, an effective program to empower girls in Uganda might not be effective in Tanzania.

Of course, policymakers get this. As Markus Goldstein put it, “Policy makers are generally not morons. They are acutely aware of the contexts in which they operate and they generally don’t copy a program verbatim. Instead, they usually take lessons about what worked and how it worked and adapt them to their situation.”

In the latest Stanford Social Innovation Review, Mary Ann Bates and Rachel Glennerster of J-PAL propose a four-step strategy to help policymakers through that process of appropriately adapting results from one context to another.

Weekly links May 19: another list experiment, P&P highlights, government nudges, and more…

By David McKenzie

  • The Papers and Proceedings issue of the AER has several papers of interest to development economists, including:
    • Esther Duflo’s lecture, “The Economist as Plumber” – “details that we as economists might consider relatively uninteresting are in fact extraordinarily important in determining the final impact of a policy or a regulation, while some of the theoretical issues we worry about most may not be that relevant” … “an economist who cares about the details of policy implementation will need to pay attention to many details and complications, some of which may appear to be far below their pay grade (e.g., the font size on posters) or far beyond their competence level (e.g., the intricacy of government budgeting in a federal system).”
    • Sandip Sukhtankar has a paper on replications in development economics, part of two sessions on replication in economics.
    • Shimeles et al. on tax auditing and tax compliance experiments in Ethiopia: “Businesses subject to threats increased their profit tax payable by 38 percent, while those that received a persuasion letter increased by 32 percent, compared to the control group.”
    • Four papers on maternal and child health in developing countries (Uganda, Kenya, India, and Zambia).
  • Following up on Berk’s post on list experiments, FiveThirtyEight provides another example, using list experiments to estimate how many Americans are atheists.
  • The Economist on how governments are using nudges – with both developed and developing country examples.
  • The equivalent of an EGOT for economists? Dave and Markus have come up with the EJAQ or REJAQ for economists who have published in all of the top-4 or top-5 journals.
  • Call for papers: the TCD/LSE/CEPR conference on development economics, to be held at Trinity College Dublin on September 18-19. Imran Rasul and I are the keynote speakers.

What happens when business training and capital programs get caught in the web of intrahousehold dynamics?

By Markus Goldstein

Two weeks ago, I blogged about a new paper by Arielle Bernhardt and coauthors that looks at the idea that when women receive a cash infusion from a program, they may give it to their husbands to invest in their businesses.

Building Grit in the Classroom and Measuring Changes in it

By David McKenzie

About a year ago I reviewed Angela Duckworth’s book on grit. At the time I noted that the book contained compelling ideas, but raised two big issues: her self-assessed 10-item Grit Scale could easily be gamed, and there was very limited rigorous evidence on whether efforts to improve grit have lasting impacts.

A cool new paper by Sule Alan, Teodora Boneva, and Seda Ertac makes excellent progress on both fronts. They conduct a large-scale experiment in Turkey with almost 3,000 fourth-graders (8- to 10-year-olds) in over 100 classrooms across 52 schools (randomization was at the school level, with 23 schools assigned to treatment).
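To see the design concretely: because assignment is at the school level, every classroom and child in a treated school receives the program, and the analysis has to account for that clustering. A minimal sketch of this kind of cluster assignment; only the 52-school/23-treated split comes from the paper, while the school IDs and seed below are made up:

```python
import random

# Hypothetical sketch of school-level (cluster) randomization:
# 52 schools, 23 of which are assigned to treatment.
random.seed(2017)  # made-up seed, for reproducibility

school_ids = list(range(1, 53))               # 52 schools
treated = set(random.sample(school_ids, 23))  # 23 treated schools

# Every classroom and student inherits their school's assignment --
# this is what makes the design cluster-randomized.
assignment = {s: ("treatment" if s in treated else "control")
              for s in school_ids}

print(sum(v == "treatment" for v in assignment.values()), "treated schools")
```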

Weekly links May 12: the ‘stans, how publishing might hurt you, list experiment discussion, and more…

By David McKenzie

List Experiments for Sensitive Questions – a Methods Bleg

By Berk Ozler

About a year ago, I wrote a blog post on issues surrounding data collection and measurement. In it, I talked about “list experiments” for sensitive questions, a method I was not sold on at the time. However, now that I have a bunch of studies going to the field at different stages of data collection, many of which ask about sensitive topics among adolescent girls, I am paying closer attention to them. In reading and thinking about the topic and how to implement it in our surveys, I came up with a number of questions about the optimal implementation of these methods. There is also probably more to be learned about how to improve them further, opening up the possibility of experimenting with them when we can. Below are the things I am thinking about; as we still have some time before our data collection tools are finalized, you, our readers, have a chance to help shape them with your comments and feedback.
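For readers new to the method, the core idea is simple: a control group sees a list of J innocuous items and reports only how many apply to them, while a treatment group sees the same list plus the sensitive item; the difference in mean counts estimates the prevalence of the sensitive behavior without any respondent individually admitting to it. A minimal sketch of that difference-in-means estimator, using entirely made-up counts and group sizes:

```python
import numpy as np

# Hypothetical data: each respondent reports only a count of items
# that apply to them. Control sees J = 4 innocuous items; treatment
# sees the same 4 plus the sensitive item.
control_counts = np.array([2, 1, 3, 2, 1, 2, 3, 0, 2, 1])
treatment_counts = np.array([3, 2, 3, 2, 2, 3, 4, 1, 2, 2])

# Difference in mean counts = estimated prevalence of the sensitive item.
prevalence_hat = treatment_counts.mean() - control_counts.mean()

# Standard error under simple random assignment to the two lists.
se = np.sqrt(treatment_counts.var(ddof=1) / len(treatment_counts)
             + control_counts.var(ddof=1) / len(control_counts))

print(f"Estimated prevalence: {prevalence_hat:.2f} (SE: {se:.2f})")
```

One implication visible even in this sketch: the estimator’s variance depends on the variance of the innocuous-item counts, which is one reason the design of the non-sensitive list matters so much for implementation.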

Weekly links May 5: an econometrics bonanza, charter schools, Chinese inequality, and more…

By David McKenzie

Money for her or for him? Unpacking the impact of capital infusions for female enterprises

By Markus Goldstein

In a 2009 paper, David McKenzie and coauthors Chris Woodruff and Suresh de Mel found that giving cash grants to male entrepreneurs in Sri Lanka generated positive and significant returns, while giving the same grants to women did not. David followed this up with work with coauthors in Ghana that compared in-kind and cash grants for women and men. Again, better returns for men (with in-kind working for some…

A cynic’s take on papers with novel methods to improve transparency

By David McKenzie

What is the signal we should infer from a paper using a novel method that is marketed as a way to improve transparency in research?

I got to thinking about this issue after seeing a lot of reactions on Twitter like “Awesome John List!”, “This is brilliant”, etc. about a new paper by Luigi Butera and John List that investigates, in a lab experiment, how cooperation in an allocation game is affected by Knightian uncertainty/ambiguity. Contrary to what the authors had expected, they find that adding uncertainty increases cooperation. The bit they are getting plaudits for is the following passage in the introduction:
