
March 2019

Weekly links March 29: dynamic experimentation, making data accessible and transparent, summaries of a gazillion conference papers, assessing economic significance, and more...

David McKenzie
  • Max Kasy blogs about his new work on designing multiple experiments for policy choice – “Trying to identify the best policy is different from estimating the precise impact of every individual policy: as long as we can identify the best policy, we do not care about the precise impacts of inferior policies. Yet, despite this, most experiments follow protocols that are designed to figure out the impact of every policy, even the obviously inferior ones.... The key to our proposal is staging: rather than running the experiment all at once, we propose that researchers start by running a first round of the experiment with a smaller number of participants. Based on this first round, you will be able to identify which treatments are clearly not likely to be the best. You can then go on to run another round of the experiment where you focus attention on those treatments that performed well in the first round. This way you will end up with a lot more observations to distinguish between the best performing treatments.” Sounds very cool, but it does depend on short-term outcomes being your main objects of interest. A stylized simulation sketch of this staged approach follows after this list.
  • Why researchers should publish their data – the J-PAL blog provides some stats on the increase in data sharing requirements and practices, and the intriguing claim that “papers in top economics and political science journals with public data and code are cited between 30-45 percent more often than papers without public data and code” – which is based on preliminary work that uses changes in journal data availability requirements to attempt to make this a causal statement.
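As a rough illustration of the staging idea in the first bullet above, here is a minimal two-stage simulation sketch in Python. The number of arms, sample sizes, true effects, and the “drop arms more than two standard errors behind the leader” rule are all hypothetical choices for illustration, not the procedure in Kasy’s paper.

```python
# Minimal sketch of a staged experiment: pilot all arms, drop clearly inferior ones,
# then spend the remaining sample on the survivors. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
true_effects = np.array([0.00, 0.02, 0.05, 0.20, 0.22])  # five candidate policies
n_stage1, n_stage2 = 100, 1000                            # per-arm sample sizes

# Stage 1: small pilot on every arm
stage1 = [rng.normal(mu, 1, n_stage1) for mu in true_effects]
means = np.array([x.mean() for x in stage1])
ses = np.array([x.std(ddof=1) / np.sqrt(n_stage1) for x in stage1])

# Keep only arms whose stage-1 mean is within ~2 standard errors of the leader
keep = np.flatnonzero(means >= means.max() - 2 * ses)
print("Arms kept for stage 2:", keep)

# Stage 2: concentrate the remaining budget on the surviving arms
stage2_means = {k: rng.normal(true_effects[k], 1, n_stage2).mean() for k in keep}
best = max(stage2_means, key=stage2_means.get)
print(f"Estimated best policy: arm {best}")
```

By dropping clearly inferior arms after the pilot, the second-stage sample is spread over fewer treatments, giving more power to distinguish the contenders for “best” – which is exactly the trade-off the post describes.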

Do I need to “recruit” a control group for my trial?

Berk Ozler

An article titled “Synthetic control arms can save time and money in clinical trials” that I read last month discusses how drug trials can be made faster and cheaper by using data collected from real-world patients instead of recruiting a control group, hence the term “synthetic controls.”[1] The proliferation of digital data in the health sector, such as “…health data generated during routine care, including electronic health records; administrative claims data; patient-generated data from fitness trackers or home medical equipment; disease registries; and historical clinical trial data”, makes such designs increasingly feasible. Combined with the fact that large amounts of time and money are spent on clinical trials, the option is attractive to researchers, drug companies, and patients awaiting new treatments alike.
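To make the idea concrete, here is a minimal sketch of one way an external control arm could be assembled: match each treated trial patient to the most similar patient in a routine-care dataset on baseline covariates, and use the matched patients’ outcomes as the comparison group. The covariates, dataset sizes, and nearest-neighbour rule below are hypothetical illustrations, not the method described in the article.

```python
# Hypothetical sketch: build a "synthetic" (external) control arm by matching trial
# patients to similar real-world patients on baseline covariates (e.g., age, severity).
import numpy as np

rng = np.random.default_rng(2)
trial_X = rng.normal([60, 5.0], [8, 1.5], size=(200, 2))   # trial patients' baseline covariates
rwd_X = rng.normal([58, 4.5], [10, 2.0], size=(5000, 2))   # real-world patients' covariates
rwd_outcome = rng.normal(0.4, 0.1, size=5000)              # outcomes observed in routine care

# Standardize covariates, then match each trial patient to the nearest real-world patient
mu, sd = rwd_X.mean(axis=0), rwd_X.std(axis=0)
dists = np.linalg.norm((trial_X - mu) / sd - ((rwd_X - mu) / sd)[:, None, :], axis=2)
matches = dists.argmin(axis=0)                              # nearest real-world patient per trial patient

synthetic_control = rwd_outcome[matches]
print(f"Synthetic control arm mean outcome: {synthetic_control.mean():.3f}")
```

The obvious caveat, which applies to any such design, is that matching on observed covariates does not guarantee comparability on unobserved ones – the trial gives up the protection that randomizing the control arm would provide.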

The work of measuring work

Kathleen Beegle

Measurement is on my mind. Partly because of the passing of Alan Krueger (credited with a major influence on the development of empirical research – notably through his book Myth and Measurement). But also because a couple of weeks ago I attended an all-day brainstorming meeting on “Methods and Measurement” hosted by the Global Poverty Research Lab at Northwestern University and IPA. The workshop covered a range of gaps and innovations in research methods related to measurement, such as: integrating data sources and applying new methods (for example, combining satellite data and machine learning with household surveys to get improved yield estimates), untangling complex socioeconomic data (such as mapping social networks), crafting measures of concepts where we lack consensus (e.g. financial health), and bringing new tech into our survey efforts (smartphones, physical trackers, etc.).

Weekly links March 22: improving girls’ education, should project management training be done more, a better binscatter, and more...

David McKenzie
  • Dave starts blogging from his new home: Over at the CGD blog, Dave Evans and Fei Yuan discuss their new work reviewing 250+ interventions to try to figure out how best to help girls succeed in school. The key finding is that, to improve learning for girls, general interventions that improve pedagogy for all students seem to be most effective.
  • Should Professors (and other researchers) be given project management training? Interesting thread by @FaiolaLabUCI with suggestions of different courses and tools out there for managing projects, including materials from a U Wisconsin graduate workshop on project management, suggestions for different software that can help, and more. Has anyone done, or does anyone know of, a project management workshop/course focused on managing field projects?
  • Also on twitter, I noticed the lack of economics papers (none in the last 5 months) at the journal Science, and asked the Social Sciences editor for an explanation. Tage Rai very kindly replied, noting some of the challenges with econ papers, authors and referees, and what he is doing to lower barriers to trying this avenue.

Weekly links March 15: yes, research departments are needed; “after elections”, experiences with registered reports, and more...

David McKenzie
  • Why the World Bank needs a research department: Penny Goldberg offers a strong rationale on Let’s Talk Development.
  • On VoxDev, Battaglia, Gulesci and Madestam summarize their work on flexible credit contracts, which is one of my favorite recent papers – they worked with BRAC in Bangladesh to offer borrowers a 12-month loan, with the option to delay up to two monthly repayments at any time during the loan cycle. This appears to be a win-win, with borrowers more likely to grow their firms, and the bank experiencing lower default and higher client retention. However, although the post doesn’t discuss it, the product seemed less successful in helping larger SMEs.
  • Political business cycles in Africa – Rachel Strohm notes a Quartz Africa story on a phenomenon that has held up a number of my impact evaluations – “Having contracts stalled and major projects abandoned is “very common”... The uncertainty is also magnified because newly-elected administrations could take months to form a cabinet and appoint heads of key agencies... as a bulk of voters travel to their ancestral homes to cast their ballot, businesses are forced to shutter or maintain skeletal operations... [this] has even made phrases like “after elections” a colloquial mainstay”.
  • The JDE interviews Eric Edmonds about his experience with the registered report process: “I thought I wrote really good pre-analysis plans and then I saw the template and realized, no, I write really bad pre-analysis plans too. I think just the act of providing that template to give some kind of standardization, is a great service to the profession... I think we need to be in a place where we have pre-analysis plans and we review them, and when we choose to deviate from them in our analysis, we're just able to be clear and to talk about why that is.” (h/t Ryan Edwards)

Spatial Jumps

Florence Kondylis

Evaluating Infrastructure Development
Investment in infrastructure is a key lever for economic growth in developing countries; reflecting this, roughly 40% of the World Bank’s total commitments go to financing infrastructure. Knowing the impact of these investments is therefore crucial for policy, but estimating that impact is difficult: infrastructure is frequently targeted towards regions where growth is anticipated, and it is often coupled with complementary investments. Separating the impact of any one investment from the others, or from pre-existing growth trends, is therefore hard. This is why development economists are pretty obsessed with finding ways to estimate the causal impact of infrastructure projects, which has led to many creative solutions. One possible option is to use spatial jumps.
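As a rough illustration of the idea (ahead of the full post), here is a minimal simulated sketch of a spatial regression-discontinuity comparison: locations just inside and just outside a project boundary are compared, with signed distance to the boundary as the running variable. The bandwidth, variable names, and data-generating process below are hypothetical.

```python
# Hypothetical sketch of a spatial "jump": estimate the discontinuity in outcomes at a
# project boundary with a local linear regression in distance to the boundary.
import numpy as np

rng = np.random.default_rng(1)
n = 4000
dist = rng.uniform(-20, 20, n)             # signed distance (km) to boundary; > 0 = served side
served = (dist > 0).astype(float)
y = 10 + 0.05 * dist + 2.0 * served + rng.normal(0, 1, n)   # smooth trend plus a jump of 2 at the boundary

# Local linear regression within a narrow bandwidth, with separate slopes on each side
h = 5.0
w = np.abs(dist) < h
X = np.column_stack([np.ones(w.sum()), served[w], dist[w], dist[w] * served[w]])
beta, *_ = np.linalg.lstsq(X, y[w], rcond=None)
print(f"Estimated jump at the boundary: {beta[1]:.2f} (true jump 2.0)")
```

The identifying assumption is the usual regression-discontinuity one: locations just either side of the boundary are comparable except for the infrastructure itself, so the estimated jump can be read as a causal effect.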

Signed referee reports: a one-year follow-up

Berk Ozler

Last January, I decided to start signing my referee reports and wrote a blog post about it. Partly because it felt like something I should do, and partly because it was a commitment device to try to write useful but critical referee reports without sounding mean. Economics suffers from many ills that it has been trying to address, and while mean and overreaching referee reports are not at the top of the list, they are something everyone has experienced and complained about at least once… So, now that I have been signing referee reports for about 15 months, how has it gone?

Do conditional cash transfers empower women?

Markus Goldstein
A couple of weeks ago, I blogged about a new approach to measuring within-household decision making. Continuing in that vein, I was recently reading a paper (ungated version here) by Almas, Armand, Attanasio, and Carneiro which offers a really n…

Judge leniency IV designs: Now not just for Crime Studies

David McKenzie

For quite a few reasons, many researchers have become increasingly skeptical of a lot of attempts to use instrumental variables for causal estimation. However, one type of instrument that has enjoyed a surge in popularity is what is known as the “judge leniency” design. It has particularly caught my attention recently through a couple of applications where the judges are not actually court judges, and it seems like there could be quite a few other applications out there. I therefore thought I’d summarize this design, these recent applications, and key things to watch out for.

The basic judge leniency set-up.
This design appears to have first gained prominence through studies that look at the impact of different types of experience with the criminal legal system. A classic example is Kling (2006, AER), who wants to look at the impact of incarceration length (S) on subsequent labor outcomes (Y). That is, he would like to estimate an equation like:

Y(i) = a + b S(i) + c'X(i) + e(i)

The concern, of course, is that even after controlling for observable differences X(i), people who receive longer prison sentences might differ from those who receive shorter sentences in ways that matter for future labor earnings.
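To fix ideas, here is a minimal simulated sketch of the judge-leniency logic (ignoring the controls X for simplicity): cases are randomly assigned to judges who differ in severity, the instrument is the leave-out mean sentence length of the assigned judge, and a simple IV estimate recovers the effect of sentence length that OLS gets wrong because of the omitted confounder. All numbers and variable names are hypothetical, not Kling’s data or specification.

```python
# Hypothetical sketch of a judge-leniency IV: instrument sentence length with the
# assigned judge's leave-out mean sentence ("leniency"), exploiting random assignment.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_judges = 5000, 50

judge = rng.integers(0, n_judges, n_cases)       # random assignment of cases to judges
severity = rng.normal(0, 0.5, n_judges)          # judges differ in sentencing tendencies
ability = rng.normal(0, 1, n_cases)              # unobserved confounder

# Sentence length S depends on the judge and (endogenously) on ability;
# later earnings Y depend on ability and on S with a true effect of -0.1
S = 12 + 6 * severity[judge] - 2 * ability + rng.normal(0, 3, n_cases)
Y = 20 + 5 * ability - 0.1 * S + rng.normal(0, 2, n_cases)

# Instrument Z: leave-out mean sentence length of each case's judge
totals = np.bincount(judge, weights=S, minlength=n_judges)
counts = np.bincount(judge, minlength=n_judges)
Z = (totals[judge] - S) / (counts[judge] - 1)

b_ols = np.polyfit(S, Y, 1)[0]                   # biased by the omitted ability term
b_iv = np.cov(Y, Z)[0, 1] / np.cov(S, Z)[0, 1]   # single-instrument IV (Wald) estimate
print(f"OLS: {b_ols:.3f}, IV: {b_iv:.3f}, true effect: -0.100")
```

The key requirements are that cases are as good as randomly assigned to judges (at least conditional on observables) and that the judge affects later outcomes only through the sentence – the exclusion restriction, which is one of the main things to watch out for with this design.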

Weekly links March 1: the path from development economics to philanthropy, nitty-gritty of survey implementation, blame your manager for your low productivity, and more...

David McKenzie