Definitions in RCTs with interference

Berk Ozler

On May 25, I attended a workshop organized by the Harvard School of Public Health, titled “Causal Inference with Highly Dependent Data in Communicable Diseases Research.” I got to meet many of the “who’s who” of this literature from the fields of biostatistics, public health, and political science, among whom was Elizabeth Halloran, who co-authored this paper with Michael Hudgens – one of the more influential papers in the field.

Weekly links June 3: Small begets big, expert predictions, process evaluation, measurement, and more…

David McKenzie

Towards a survey methodology methodology: Guest post by Andrew Dillon

When I was a graduate student setting off on my first data collection project, my advisors pointed me to the ‘Blue Books’ for advice on how to make survey design choices. The Glewwe and Grosh volumes are still an incredibly useful resource on multi-topic household survey design. Since their publication, the rise of panel data collection, increasingly in the form of randomized control trials, has prompted a discussion about…

Book Review: Grit – Takeaways for Development Economists and Parents

David McKenzie

Angela Duckworth’s new book Grit: The Power of Passion and Perseverance has been launched with great fanfare, reaching number two on the NY Times nonfiction bestseller list. She recently gave a very polished and smooth book launch talk to a packed audience at the World Bank, and is working with World Bank colleagues on improving grit in classrooms in Macedonia. Billed as giving “the secret to outstanding achievement”, the book interested me as both a researcher and a parent, so I thought I’d continue my book review series with some thoughts on it.

Weekly Links, May 27: Conscious insects, good data collection practices, all male panels, and more...

Berk Ozler
  • On selecting which variables to collect data on in your impact evaluation: Carneiro et al. have a new paper out – “Optimal Data Collection for Randomized Control Trials” – which argues that if you have a household survey or census in advance, you can use an algorithm to select the right covariates, potentially reducing data collection costs or substantially improving precision. A rough sketch of the idea follows below.
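The paper’s actual procedure is considerably more careful than this, but a minimal greedy sketch conveys the intuition: pick the covariates that buy the most residual-variance reduction (and hence precision) per unit of collection cost, subject to a budget. Everything here – the pilot DataFrame, the cost dictionary, the budget – is a hypothetical stand-in, not the authors’ implementation.

```python
# Illustrative sketch only: greedily add the covariate with the best
# R^2 gain per unit of collection cost, until the budget is exhausted.
import numpy as np
import pandas as pd

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an OLS regression of y on X (with an intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def greedy_covariate_selection(pilot: pd.DataFrame, outcome: str,
                               costs: dict, budget: float) -> list:
    """Pick covariates from pre-existing (pilot/census) data."""
    chosen, spent = [], 0.0
    remaining = [c for c in pilot.columns if c != outcome]
    y = pilot[outcome].to_numpy()
    while remaining:
        base = r_squared(pilot[chosen].to_numpy(), y) if chosen else 0.0
        # R^2 gain per dollar for each still-affordable candidate
        gains = {c: (r_squared(pilot[chosen + [c]].to_numpy(), y) - base)
                    / costs[c]
                 for c in remaining if spent + costs[c] <= budget}
        if not gains:
            break
        best = max(gains, key=gains.get)
        chosen.append(best)
        spent += costs[best]
        remaining.remove(best)
    return chosen
```

In practice one would want to validate the chosen set on held-out data, since chasing R² greedily in a small pilot can overfit.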

Electronic versus paper-based data collection: reviewing the debate

This post was co-authored by Sacha Dray, Felipe Dunsch, and Marcus Holmlund.

Impact evaluation needs data, and research teams often collect it from scratch. Raw data fresh from the field is a bit like dirty laundry: it needs cleaning. Some stains are unavoidable – we all spill wine/sauce/coffee on ourselves from time to time, which is mildly frustrating but easily dismissed as a fact of life, a random occurrence. But as these occurrences become regular, we might begin to ask ourselves whether something is systematically wrong.
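One way to make the “random stain vs. systematic problem” question concrete: if errors were purely random, their rate should not vary much across enumerators, devices, or collection modes. A quick chi-square test on an error-count table is one possible diagnostic – a sketch with made-up counts, not anything from the post:

```python
# Sketch: do data-entry error rates differ by enumerator more than
# chance would suggest? (All numbers below are invented.)
import numpy as np
from scipy.stats import chi2_contingency

n_records = np.array([200, 180, 210, 190])   # records checked, per enumerator
n_errors  = np.array([  6,   5,  31,   7])   # errors found, per enumerator

table = np.vstack([n_errors, n_records - n_errors])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.4f}")
# A tiny p-value suggests errors are not spread evenly: something
# systematic (a form, a device, an enumerator) may be at work.
```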

Issues of data collection and measurement

Berk Ozler
About five years ago, soon after we started this blog, I wrote a post titled “Economists have experiments figured out. What’s next? (Hint: It’s Measurement)”. Soon after the post went up, folks from IPA emailed me saying we should experiment with some important measurement issues, making use of IPA’s network of studies around the world.

Weekly links May 20: AEA P&P Special Edition

David McKenzie
The latest AEA Papers and Proceedings has a number of interesting papers:
  • In the Richard T. Ely lecture, John Campbell discusses the challenge of consumer financial regulation. He distinguishes five dimensions of financial ignorance that many households exhibit: 1) ignorance of even the most basic financial concepts (financial illiteracy); 2) ignorance of contract terms (such as not knowing about the fees built into credit cards, or when mortgage interest rates can change); 3) ignorance of financial history – relying too much on one’s own experiences and the recent past; 4) ignorance of self – many financially illiterate people are over-confident about their abilities; and 5) ignorance of incentives, strategy, and equilibrium – failure to take account of the incentives faced by other parties to transactions. Given these problems, and the limits of financial education and disclosure requirements in fixing them, he discusses what financial regulation is needed: “consumer financial regulation must confront the trade-off between the benefits of intervention to behavioral agents, and the costs to rational agents…the task for economists is to confront this trade-off explicitly”.
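To make the second dimension (ignorance of contract terms) concrete, here is some back-of-the-envelope annuity arithmetic showing how a mortgage payment moves when a teaser rate resets. The loan terms are invented for illustration, and the sketch simplifies by re-amortizing the original balance over the full term:

```python
# Invented example: the payment jump a household faces when an
# adjustable-rate mortgage resets from a teaser rate.
def monthly_payment(principal: float, annual_rate: float, n_months: int) -> float:
    """Standard fixed-payment annuity formula."""
    r = annual_rate / 12.0
    return principal * r / (1.0 - (1.0 + r) ** -n_months)

balance, term = 200_000.0, 360                  # 30-year, $200k loan
before = monthly_payment(balance, 0.03, term)   # 3% teaser rate
after  = monthly_payment(balance, 0.06, term)   # resets to 6%
print(f"payment at 3%: ${before:,.0f}; at 6%: ${after:,.0f} "
      f"({(after / before - 1) * 100:.0f}% jump)")
# -> roughly $843 vs. $1,199 a month, a ~42% jump
```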

Some tips on doing impact evaluations in conflict-affected areas

Markus Goldstein
I’ve recently been doing some work with my team at the Gender Innovation Lab on data whose collection was interrupted by conflict (and by conflict here I mean the armed variety, between organized groups). This got me thinking about how doing an impact evaluation in a conflict situation is different, so I reached out to a number of people – Chris Blattman, Andrew Beath, Niklas Buehren, Shubha Chakravarty, and Macartan Humphreys – for their views (collectively, they’re “the crowd” in the rest of this post). What follows are a few of my observations and a heck of a lot of theirs (and of course…

What’s New in Measuring Subjective Expectations?

David McKenzie

Last week I attended a workshop on Subjective Expectations at the New York Fed. There were 24 new papers on using subjective probabilities and subjective expectations in both developed and developing country settings. I thought I’d summarize some of the things I learned, or that seemed of most interest to me and potentially to our readers:

Subjective Expectations don’t provide a substitute for impact evaluation
I presented a new paper based on the large business plan competition in Nigeria that I conducted an impact evaluation of. Three years after applying for the program, I elicited expectations from the treatment group (competition winners) of what their businesses would be like had they not won, and from the control group of what their businesses would have been like had they won. The key question of interest is whether these individuals can form accurate counterfactuals. If they could, this would give us a way to measure the impacts of programs without control groups (just ask the treated for counterfactuals), and to derive individual-level treatment effects. Unfortunately, the results show that neither the treatment nor the control group can form accurate counterfactuals. Both overestimate how important the program was for their businesses: the treatment group thinks they would be doing worse had they lost than the control group actually is doing, while the control group thinks they would be doing much better had they won than the treatment group is actually doing. In a dynamic environment, where businesses are changing rapidly, subjective expectations do not seem to offer a substitute for impact evaluation counterfactuals.
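The comparison at the heart of the exercise can be written down in a few lines: each arm’s stated counterfactual is stacked against the other arm’s realized outcome, and accuracy means both gaps are near zero. The DataFrame and column names below are hypothetical placeholders, not the paper’s actual variables:

```python
# Sketch of the counterfactual-accuracy check (hypothetical columns).
import pandas as pd

def counterfactual_gaps(df: pd.DataFrame) -> dict:
    treat = df[df.won_competition]
    ctrl  = df[~df.won_competition]
    return {
        # winners' guess of profits had they lost, vs. losers' actual profits
        "treated_gap": treat.expected_profits_if_lost.mean()
                       - ctrl.actual_profits.mean(),
        # losers' guess of profits had they won, vs. winners' actual profits
        "control_gap": ctrl.expected_profits_if_won.mean()
                       - treat.actual_profits.mean(),
    }

# The pattern described in the post: treated_gap < 0 (winners think they
# would be doing worse than losers actually are) and control_gap > 0
# (losers think they would be doing better than winners actually are) --
# both directions overstate the program's importance.
```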
