
Any chance to use impact evaluations with no impact? The Mexican Case – Guest Post by Gonzalo Hernández Licona

Gonzalo was part of a panel with David McKenzie at a recent meeting of the Impact Evaluation Network (IEN). One of the questions during this discussion was whether there were good examples of cases where impact evaluations had found null or negative results, and policymakers had actually changed policy as a result. We thought others would be interested in hearing his examples from Mexico.

It feels like a cold shower when impact evaluations (IEs) do not show positive impacts. Those studies are sexy neither for academic publication nor for public policy use. But the fact that some IEs show no impact of certain programs or projects is an important piece of information!

I would like to suggest here that if a country has an institutional and relatively credible monitoring and evaluation (M&E) system, the chances of using IEs that find no impact increase.

Weekly links June 9: the dangers of out-dangering, debating how to provide health care for the poor, fighting corruption, and more…

By David McKenzie
  • Milli Lake and Sarah Parkinson on the ethics of fieldwork preparedness – “It’s one of the discipline’s worst kept secrets that graduate students, in particular, feel practically unprepared for their fieldwork… We worry about an intellectual trend that increasingly rewards researchers for “out-dangering” one another (often with dubious scholarly gain). This doesn’t mean scholars should abandon fieldwork; it means that we should take the practical and ethical components of its planning and implementation more seriously. We can start by asking simple questions about first aid, check-ins, transport safety, and data protection”

Are good school principals born or can they be made?

By David Evans
Also available in: French | Portuguese | Arabic

Good principals can make a big difference
“It is widely believed that a good principal is the key to a successful school.” So say Branch, Hanushek, and Rivkin in their study of the effect of school principals on learning productivity. But how do you measure this? Using a database from Texas in the United States, they employ a value-added approach analogous to that used to measure performance among teachers. They control for basic information on student backgrounds (gender, ethnicity, and an indicator of poverty) as well as student test scores from the previous year. Then they ask: what happens to student learning when a school changes principals? They find that increasing principal quality by one standard deviation increases student learning by 0.11 standard deviations. Even after additional adjustments, their most conservative estimates show that “a 1-standard-deviation increase in principal quality translates into roughly 0.05 standard deviations in average student achievement gains, or nearly two months of additional learning.”

Notably, while improving teacher effectiveness affects the average performance of all of the students in that teacher's class, improving principal effectiveness affects the average performance of the entire school, so the potential gains are high.
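The value-added approach described above can be sketched as a regression of current test scores on prior-year scores and student background, with one indicator per principal so that each indicator's coefficient is that principal's contribution. Here is a minimal illustration on simulated data — the sample sizes, coefficients, and variable names are all hypothetical, not taken from the Texas study:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4000                 # simulated students
n_principals = 20        # simulated principals (one per school)

# Hypothetical student data: prior-year score, a poverty indicator, a principal ID.
prior = rng.normal(0, 1, n)
poverty = rng.integers(0, 2, n)
principal = rng.integers(0, n_principals, n)

# True (unobserved) principal effects, in student-score standard deviations.
true_effects = rng.normal(0, 0.1, n_principals)

# Current score = persistence of prior learning + background + principal effect + noise.
score = 0.7 * prior - 0.2 * poverty + true_effects[principal] + rng.normal(0, 1, n)

# Design matrix: prior score, poverty, and one dummy per principal (no intercept,
# so each dummy's coefficient is that principal's value-added level).
X = np.column_stack([prior, poverty, np.eye(n_principals)[principal]])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

est_effects = coef[2:]
print("estimated sd of principal value-added:", est_effects.std().round(3))
```

The spread of these estimated principal effects, scaled against the distribution of student gains, is what statements like "a 1-standard-deviation increase in principal quality" refer to.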

Weekly links June 2: do you need to correct your p-values for all the tests you run in your life?, nimble RCTs, the elusive entrepreneur, and more…

By David McKenzie

Weekly links May 26: the Chetty production function, collect priors before you work, small samples bring trouble, and more…

By David McKenzie

A Framework for Taking Evidence from One Location to Another

By David Evans
Also available in: Français

“Just because it worked in Brazil doesn’t mean it will work in Burundi.” That’s true. And hopefully obvious. But some version of this critique continues to be leveled at researchers who carry out impact evaluations around the world. Institutions vary. Levels of education vary. Cultures vary. So no, an effective program to empower girls in Uganda might not be effective in Tanzania.

Of course, policymakers get this. As Markus Goldstein put it, “Policy makers are generally not morons. They are acutely aware of the contexts in which they operate and they generally don’t copy a program verbatim. Instead, they usually take lessons about what worked and how it worked and adapt them to their situation.”

In the latest Stanford Social Innovation Review, Mary Ann Bates and Rachel Glennerster from J-PAL propose a four-step strategy to help policy makers through that process of appropriate adaptation of results from one context to another.

Weekly links May 19: another list experiment, P&P highlights, government nudges, and more…

By David McKenzie
  • The papers and proceedings issue of the AER has several papers of interest to development economists, including:
    • Esther Duflo’s lecture on “The Economist as Plumber” – “details that we as economists might consider relatively uninteresting are in fact extraordinarily important in determining the final impact of a policy or a regulation, while some of the theoretical issues we worry about most may not be that relevant” … “an economist who cares about the details of policy implementation will need to pay attention to many details and complications, some of which may appear to be far below their pay grade (e.g., the font size on posters) or far beyond their competence level (e.g., the intricacy of government budgeting in a federal system).”
    • Sandip Sukhtankar has a paper on replications in development economics, part of two sessions on replication in economics.
    • Shimeles et al. on tax auditing and tax compliance experiments in Ethiopia: “Businesses subject to threats increased their profit tax payable by 38 percent, while those that received a persuasion letter increased by 32 percent, compared to the control group.”
    • 4 papers on maternal and child health in developing countries (Uganda, Kenya, India, Zambia).
  • Following up on Berk’s post on list experiments, 538 provides another example, using list experiments to identify how many Americans are atheists.
  • The Economist on how governments are using nudges – with both developed and developing country examples.
  • The equivalent to an EGOT for economists? Dave and Markus have come up with the EJAQ or REJAQ for economists who have published in all the top-4 or top-5 journals.
  • Call for papers: TCD/LSE/CEPR conference on Development economics to be held at Trinity College, Dublin on September 18-19. Imran Rasul and I are keynote speakers.
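The list experiments mentioned above (Berk's post and the 538 piece on atheists) rest on a very simple estimator: the control group sees a list of innocuous items, the treatment group sees the same list plus the sensitive item, and each respondent reports only *how many* items apply to them — never which ones. The difference in mean counts estimates the prevalence of the sensitive trait. A hedged sketch on simulated responses (the 30 percent prevalence, item count, and sample sizes are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5000                  # respondents per arm (hypothetical)
true_prevalence = 0.30    # made-up share holding the sensitive trait

# Control group: each respondent endorses some number of 4 innocuous items.
control_counts = rng.binomial(4, 0.5, n)

# Treatment group: same 4 items plus the sensitive one; only the total
# count is reported, which is what protects respondents' privacy.
treat_counts = rng.binomial(4, 0.5, n) + rng.binomial(1, true_prevalence, n)

# Difference in means estimates the prevalence of the sensitive item.
estimate = treat_counts.mean() - control_counts.mean()
print(f"estimated prevalence: {estimate:.3f}")
```

The price of that privacy protection is a noisier estimate than direct questioning, which is why list experiments need comparatively large samples.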

What happens when business training and capital programs get caught in the web of intrahousehold dynamics?

By Markus Goldstein
Two weeks ago, I blogged about a new paper by Arielle Bernhardt and coauthors which looked at the idea that when women receive a cash infusion from a program, they may give it to their husbands to invest in their business.

Building Grit in the Classroom and Measuring Changes in it

By David McKenzie

About a year ago I reviewed Angela Duckworth’s book on grit. At the time I noted that it contained compelling ideas, but flagged two big issues: her self-assessed 10-item Grit scale could be very gameable, and there was very limited rigorous evidence on whether efforts to improve grit have lasting impacts.

A cool new paper by Sule Alan, Teodora Boneva, and Seda Ertac makes excellent progress on both fronts. They conduct a large-scale experiment in Turkey with almost 3000 fourth-graders (8-10 year olds) in over 100 classrooms in 52 schools (randomization was at the school level, with 23 schools assigned to treatment).
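School-level (cluster) randomization like the one described here is simple to implement: schools, not students, are the unit of assignment, and every classroom in a school inherits its school's status. A minimal sketch using the paper's counts (52 schools, 23 treated); the school IDs and the two-classrooms-per-school roster are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

n_schools = 52
n_treated = 23

# Randomly pick 23 of the 52 schools for treatment.
schools = np.arange(n_schools)
treated_schools = set(rng.choice(schools, size=n_treated, replace=False))

# Hypothetical roster: each classroom belongs to a school and
# inherits that school's assignment (the point of clustering).
classrooms = [(s, c) for s in schools for c in range(2)]  # 2 classrooms/school
assignment = {(s, c): s in treated_schools for s, c in classrooms}

n_treated_classrooms = sum(assignment.values())
print(f"{len(treated_schools)} treated schools, {n_treated_classrooms} treated classrooms")
```

Because all classrooms in a school share one assignment, the subsequent analysis must cluster standard errors at the school level rather than treat students as independent observations.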
