
What’s new in education research? Impact evaluations and measurement – March round-up

By David Evans



Here is a curated round-up of recent research on education in low- and middle-income countries, with a few findings from high-income countries that I found relevant. All are from the last few months, since my last round-up.

If I’m missing recent articles that you’ve found useful, please add them in the comments!

A pre-analysis plan is the only way to take your p-value at face value

By Berk Ozler

Andrew Gelman has a post from last week that discusses the value of preregistering studies, likening it to the value of random sampling and RCTs, which allow you to make inferences without relying on untestable assumptions. His argument, which is nicely described in this paper, is that we don’t need to assume nefarious practices by study authors, such as specification searching or selective reporting, to worry that the p-value reported in the paper we’re reading may not be correct.

Weekly links March 10: Ex post power calcs ok? Indian reforms, good and bad policies, and more…

By David McKenzie
  • Andrew Gelman argues that it can make sense to do design analysis/power calculations after the data have been collected – but he also makes clear how NOT to do this (e.g. if a study with a small sample and noisy measurement finds a statistically significant increase of 40% in profits, don’t then check whether it had power to detect a 40% increase – instead you should be asking how likely it is that the estimated treatment effect has the wrong sign or an overestimated magnitude, and should base the effect size you examine power for on external information). Gelman and John Carlin provide an R function, retrodesign(), to do these calculations; a sketch of this kind of calculation appears after this list.
  • Annie Lowrey interviews Angus Deaton in the Atlantic, and discusses whether it is better to be poor in the Mississippi Delta or in Bangladesh, opioid addiction, and the class of President Obama.
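For readers curious what such a design analysis involves, here is a minimal Python sketch, assuming the estimate is normally distributed around a hypothesized true effect. It is an illustrative re-implementation of the kind of calculation retrodesign() performs, not the authors' own code; the function name, parameters, and example numbers are made up for illustration.

import numpy as np
from scipy.stats import norm

def design_analysis(true_effect, se, alpha=0.05, n_sims=100_000, seed=0):
    # Approximate power, the Type S error rate (probability a significant
    # estimate has the wrong sign), and the exaggeration ratio (Type M),
    # given a hypothesized true effect and the study's standard error.
    rng = np.random.default_rng(seed)
    z = norm.ppf(1 - alpha / 2)                   # critical value, e.g. 1.96
    draws = rng.normal(true_effect, se, n_sims)   # hypothetical replications of the estimate
    significant = np.abs(draws) > z * se          # replications that reach p < alpha
    power = significant.mean()
    # Among significant results, the share with the wrong sign (Type S error)
    type_s = np.mean(np.sign(draws[significant]) != np.sign(true_effect))
    # Expected |estimate| relative to the true effect, conditional on significance (Type M)
    exaggeration = np.abs(draws[significant]).mean() / abs(true_effect)
    return power, type_s, exaggeration

# Example: a small, noisy study where external evidence suggests a true effect of 5
# but the standard error is 10.
print(design_analysis(true_effect=5, se=10))

With numbers like these the sketch illustrates the point of the post: power is low, a non-trivial share of significant estimates have the wrong sign, and the significant estimates are several times larger than the plausible true effect.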

Can you help some firms without hurting others? Yes, in a new Kenyan business training evaluation

By David McKenzie

A multitude of government programs directly try to help particular firms grow, and business training is one of the most common forms of such support. A key concern when thinking about the impacts of such programs is whether any gains to participating firms come at the expense of their market competitors. For example, perhaps you train some businesses to market their products slightly better, causing customers to abandon their competitors and simply reallocating which businesses sell the product. This reallocation can still be economically beneficial if it improves allocative efficiency, but failing to account for the losses to untrained firms would cause you to overestimate the overall program impact. This is a problem for most impact evaluations, which randomize at the individual level which firms get to participate in a program.

In a new working paper, I report on a business training experiment that I ran with the ILO in Kenya, designed to measure these spillovers. We find that, over a three-year period, trained firms are able to sell more without their competitors selling less, by diversifying the set of products they produce and by building underdeveloped markets.

Weekly links March 3: financial literacy done right, e-voting, private vs public schooling, and more…

By David McKenzie

Fact checking universal basic income: can we transfer our way out of poverty?

By Berk Ozler
The New York Times published an article last week titled “The Future of Not Working.” In it, Annie Lowrey discusses the universal basic income experiments being run in Kenya by GiveDirectly. No surprise there: you can look forward to more pieces in other popular outlets very soon, as soon as their writers return from the same villages visited by the Times.

Weekly links Feb 24: school spending matters, nudging financial health, cash transfers bring giant snakes and blood magic, and more…

By David McKenzie
  • On the 74 million blog, an interview with Kirabo Jackson about the importance of school spending and other education-related topics: “In casual conversation with most economists, they would say, ‘Yeah, yeah, we know that school spending doesn’t matter.’ I sort of started from that standpoint and thought, let me look at the literature and see what the evidence base is for that statement. As I kept on looking through, it became pretty clear that the evidence supporting that idea was pretty weak.” There is also discussion of the need to measure things beyond test scores.
  • IPA has a nice little booklet on nudges for financial health – a quick summary of the evidence for commitment devices, opt-out defaults, and reminders.

The State of Development Journals 2017: Quality, Acceptance Rates, and Review Times

By David McKenzie
I recently became a co-editor at the World Bank Economic Review, and was surprised to learn how low the acceptance rate is for submitted papers. The American Economic Review and other AEA journals, such as the AEJ Applied, publish annual editor reports in which key information on acceptance rates and review times is made publicly available, but no comparable information is available for development economics journals.
