
Doing Development Economics at a Liberal Arts College Part One

By David McKenzie
With the job market coming up, a talk I gave last week to a great group of faculty and development students at Williams College, and the program for the recent LACDEV conference, I thought it might be interesting to learn a bit about life as a development economist at a liberal arts college. I asked four faculty at different schools for some thoughts, thinking I might get two to agree, but was very pleased to get excellent insights from all four.

Weekly links September 28: the peril of meetings, endogenous responses mess up big data uses, what 600+ development papers tell you about our field, and more...

By David McKenzie

Make Your Research Known – 10 Tools to Increase Consumption of Your Research

By David Evans

Many researchers hope that their research will have some impact on policy. Research can impact policy directly: A policymaker uses the results of your study in making a policy decision. For direct policy impact, policymakers – or the people who advise them or the people who vote for them – have to know about your work. Research can also impact policy indirectly: Your research becomes part of a body of evidence which collectively affects future policy decisions. For indirect policy impact, other researchers have to know about your work. It is unlikely that your research will impact policy either directly or indirectly if no one knows about it.

Over the years, I’ve experimented with many ways of increasing consumption of research (together with colleagues and co-authors), and I’ve seen many other ways. Here is a menu of ten options. The point isn’t to do all of these, but rather to select those that will help you reach the audience you most want to impact.

Is your education program benefiting the most vulnerable students?

By David Evans

Just about every article or report on education that we read these days – and some that we’ve written – bemoans the quality of education in low- and middle-income countries. The World Bank’s World Development Report 2018 devoted an entire, well-documented chapter to “the many faces of the learning crisis.” Recent reports on education in Latin America and in Africa make the same point.

But within low- and middle-income countries, not all education is created equal, and not all students face the same challenges. As Aaron Benavot highlights, “policies found to be effective in addressing the challenges facing ‘average’ or typical learners” will not necessarily be effective in addressing those “faced by learners from marginalized groups.”

Indeed, we know that within a given classroom, there can be massive variation in learning across students. As you can see in the figure below, among 9th grade students in New Delhi, India, some are reading at the 8th grade level while others read at the 6th grade level. In math, performance ranges from the 3rd grade level to the 5th grade level. So if an intervention increases average performance, are we helping the students who were already ahead or those who are furthest behind? (In this case, no one’s really ahead, since even the top performers are way behind grade level. But the students in the bottom quartile are doubly disadvantaged: behind in learning, and in a low-performing school system.)

Source: World Development Report 2018, using data from Muralidharan, Singh, and Ganimian (2017).
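One way to make this question concrete is to estimate the effect separately by baseline-score quartile rather than only on average. The sketch below is not from the post itself; it uses simulated data and hypothetical variable names, purely to illustrate the mechanics.

```python
# Illustrative sketch: does an intervention help the students furthest behind?
# All data here is simulated; variable names are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 2000
baseline = rng.normal(0, 1, n)      # baseline test score
treat = rng.integers(0, 2, n)       # random assignment to treatment
# Assume (for illustration only) a larger effect for low performers.
effect = 0.3 - 0.2 * (baseline > 0)
endline = baseline + effect * treat + rng.normal(0, 1, n)

df = pd.DataFrame({"baseline": baseline, "treat": treat, "endline": endline})
df["quartile"] = pd.qcut(df["baseline"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Difference in mean endline scores (treated minus control) within each
# baseline quartile, instead of one pooled average effect.
by_q = df.groupby(["quartile", "treat"], observed=True)["endline"].mean().unstack()
print(by_q[1] - by_q[0])
```

If the bottom-quartile effect is near zero while the top-quartile effect is large, a program that looks effective on average may still be leaving the most vulnerable students behind.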

Weekly links September 21: scholarship labels, designing for spillovers, does your paper have a bande dessinée version? And more...

By David McKenzie

Do impact evaluations tell us anything about reducing poverty? Vol. II: The empire stagnates

By Markus Goldstein, coauthored with Aletheia Donald
Four years ago, Markus looked at 20 impact evaluations and wrote a post concluding that most of them didn’t have much to say about reducing poverty (where poverty was defined as expenditure, income, and/or wealth). This summer Shanta Devarajan asked on Twitter for an update, so here it is.

Should you oversample compliers if budget is limited and you are concerned take-up is low?

By David McKenzie

My colleague Bilal Zia recently released a working paper (joint with Emmanuel Hakizimfura and Douglas Randall) that reports on an experiment conducted with 200 Savings and Credit Cooperative Associations (SACCOs) in Rwanda. The experiment aimed to test two different approaches to decentralizing financial education delivery, and finds improvements are greater when SACCOs get to choose which staff should be trained rather than when they are told to send the manager, a loan officer, and a board member.

One point of the paper that I thought might be of broader interest to our readers concerns the issue of what to do when you only have enough budget to survey a sample of a program’s beneficiaries, and you are concerned about getting enough compliers.
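To see why oversampling can help, here is a rough Monte Carlo sketch. It is not the paper’s actual design: the take-up rate, effect sizes, and budget are all made-up numbers. It compares a proportional survey sample of the treated arm with one that spends half the budget on takers; when take-up is low and takers’ outcomes are more variable, the oversampled design gives a tighter estimate of the treated-arm mean, provided you reweight by stratum size.

```python
# Hypothetical sketch: two ways to allocate a fixed survey budget across the
# treated arm when take-up is low (20%) and takers' outcomes are noisier.
import numpy as np

rng = np.random.default_rng(1)
N, p_takeup, budget = 5000, 0.2, 500

def simulate(oversample, reps=2000):
    estimates = []
    for _ in range(reps):
        took = rng.random(N) < p_takeup
        # Outcome: baseline noise plus a noisy treatment effect for takers only.
        y = rng.normal(0, 1, N) + took * rng.normal(1.0, 3.0, N)
        idx_t, idx_n = np.flatnonzero(took), np.flatnonzero(~took)
        if oversample:
            m_t = budget // 2                     # half the surveys go to takers
        else:
            m_t = round(budget * len(idx_t) / N)  # proportional to stratum size
        m_n = budget - m_t
        s_t = rng.choice(idx_t, min(m_t, len(idx_t)), replace=False)
        s_n = rng.choice(idx_n, min(m_n, len(idx_n)), replace=False)
        # Stratum-weighted mean stays unbiased for the full treated-arm mean.
        estimates.append((len(idx_t) * y[s_t].mean()
                          + len(idx_n) * y[s_n].mean()) / N)
    return np.std(estimates)

print("SD of estimate, proportional sampling: %.4f" % simulate(False))
print("SD of estimate, oversampling takers:   %.4f" % simulate(True))
```

The reweighting step is what keeps the oversampled estimate honest: without it, a sample that deliberately over-represents takers would overstate the treated-arm mean.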

Lessons from a cash benchmarking evaluation: Authors' version

This is a guest post by Craig McIntosh and Andrew Zeitlin.

We are grateful to have this chance to speak about our experiences with USAID's pilot of benchmarking its traditional development assistance against unconditional cash transfers. Along with the companion benchmarking study that is still in the field (that one compares a youth workforce readiness program to cash), we have spent the past two and a half years working to design these head-to-head studies, and are glad to have a chance to reflect on the process. These are complex studies with many stakeholders and lots of collective agreements over communications, and our report to USAID, released yesterday, reflects that. Here, we convey our personal impressions as researchers involved in the studies.

Weekly links September 14: stealth cash vs WASH, online job boards, income-smoothing from bridges, lowering interest rates through TA, and more...

By David McKenzie

Declaring and diagnosing research designs

This is a guest post by Graeme Blair, Jasper Cooper, Alex Coppock, and Macartan Humphreys.

Empirical social scientists spend a lot of time trying to develop really good research designs and then trying to convince readers and reviewers that their designs really are good. We think the challenges of generating and communicating designs are made harder than they need to be because (a) there is not a common understanding of what constitutes a design and (b) there is a dearth of tools for analyzing the properties of a design.
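The authors' framework is implemented in their DeclareDesign package for R. Purely as a toy sketch of the declare-then-diagnose idea (this is not their API, and all numbers are invented), here is a Python version in which a design is a function that simulates one run of the study, and diagnosands such as bias and power are computed over many simulated runs.

```python
# Toy sketch of "declare, then diagnose": a design is a simulation of one
# study run; diagnosands summarize its properties over many runs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def design(n=100, tau=0.25):
    """One simulated run: model, random assignment, and estimator."""
    z = rng.integers(0, 2, n)                    # random assignment
    y = tau * z + rng.normal(0, 1, n)            # potential-outcomes model
    diff = y[z == 1].mean() - y[z == 0].mean()   # difference-in-means estimator
    t, p = stats.ttest_ind(y[z == 1], y[z == 0])
    return diff, p

def diagnose(design, true_tau=0.25, sims=2000):
    """Run the design many times and compute diagnosands."""
    runs = [design() for _ in range(sims)]
    estimates = np.array([r[0] for r in runs])
    pvals = np.array([r[1] for r in runs])
    return {"bias": estimates.mean() - true_tau,
            "power": (pvals < 0.05).mean()}

print(diagnose(design))
```

The payoff of working this way is that claims like "this design has 80% power" or "this estimator is unbiased for this estimand" become properties you compute and report rather than assert.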
