Today I wanted to take the opportunity to talk about a new initiative that the Africa Region and the Research Group at the World Bank are launching today. The idea here is that we don't know enough about how to effectively address the underlying causes of gender inequality. Let me start by explaining what I mean by underlying causes. Take the case of female farmers. There is a lot of literature out there which shows that women have lower agricultural yields than men. And some of it shows that this is because women have lo
How can we better design ICT programs for development and evaluate their impact on improving people's well-being? A new approach, the Alternative Evaluation Framework (AEF), takes into account multiple dimensions of people's economic, social and political lives rather than simply focusing on access, expenditure and infrastructure of ICT tools. This new approach is presented in a How-To Note, Valuing Information: A Framework for Evaluating the Impact of ICT Programs, authored by Bjorn-Soren Gigler, a Senior Governance Specialist at the World Bank Institute's Innovation Practice.
Guest post from ace evaluator Dr Karl Hughes (right, in the field. Literally.)
Just over a year ago now, I wrote a blog post featured on FP2P – Can we demonstrate effectiveness without bankrupting our NGO and/or becoming a randomista? – about Oxfam's attempt to up its game in understanding and demonstrating its effectiveness. There, I outlined our ambitious plan of 'randomly selecting and then evaluating, using relatively rigorous methods by NGO standards, 40-ish mature interventions in various thematic areas'. We have dubbed these 'effectiveness reviews'. Given that most NGOs are currently grappling with how to credibly demonstrate their effectiveness, our 'global experiment' has grabbed the attention of some eminent bloggers (see William Savedoff's post for a recent example). Now I'm back with an update.
Is in danger of being messed up. Here is why: There are two fundamental reasons for doing impact evaluation: learning and judgment. Judgment is simple – thumbs up, thumbs down: the program continues or not. Learning is more amorphous – we do impact evaluation to see if a project works, but we try to build in as many ways to understand the results as possible, maybe do a couple of treatment arms so we can see what works better than what. In learning evaluations, real failure is a lack of statistical power, more so than the program working or
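To make the point about statistical power concrete, here is a minimal sketch (my own illustration, not from the post) of the standard back-of-the-envelope calculation: the minimum detectable effect (MDE) of a simple two-arm trial, under the usual assumptions of a two-sided test, a normal approximation, and equal-sized arms. An underpowered evaluation is one whose MDE is larger than any effect the program could plausibly produce.

```python
# Minimum detectable effect for a two-arm trial, in standard-deviation
# units. Assumptions (mine, for illustration): two-sided test at the
# given alpha, normal approximation, equal allocation across arms.
from math import sqrt
from statistics import NormalDist

def mde(n_per_arm, alpha=0.05, power=0.80):
    """Smallest true effect (in SD units) the trial can reliably detect."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for target power
    return (z_alpha + z_beta) * sqrt(2.0 / n_per_arm)

for n in (100, 400, 1600):
    print(f"{n} per arm -> MDE ~ {mde(n):.2f} SD")
```

The pattern the numbers show is the familiar one: quadrupling the sample only halves the detectable effect, which is why "lack of statistical power" is such an easy failure mode to stumble into.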
This is an excerpt from "School Vouchers Can Help Improve Education Systems" published on the Opinions section of the World Innovation Summit for Education (WISE) website.
As the demand for education increases, resources remain scarce. In most countries, the government is both the major financier and the main provider of education. However, schooling still does not reach all members of society equally.
One way of financing education is to provide families with the funding – via cash transfers to schools based on enrollments or by providing cash to families to purchase schooling – in other words, through vouchers. The objective of a voucher program is to extend financial support from the government to other education providers and thus give all parents, regardless of income, the opportunity to choose the school that best suits their preferences.
As part of a new series looking at how institutions are approaching impact evaluation, DI virtually sat down with Nick York, Head of Evaluation, and Gail Marzetti, Deputy Head, Research and Evidence Division. For Part I of this series, see yesterday's post. Today we focus on DFID's funding for research and impact evaluation.
I am in the midst of a trip working on impact evaluations in Ghana and Tanzania, and these have really brought home the potential and pitfalls of working with programs' monitoring data.
In many evaluations, the promise of monitoring data is significant. In some cases, you can even do the whole impact evaluation with program monitoring data (for example, when a specific intervention is tried out with a subset of a program's clients). However, in most cases a combination of monitoring and survey data is required.
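Combining the two sources usually comes down to linking records by a client identifier, and the first diagnostic is coverage: who appears in the monitoring system but not the survey, and vice versa. A minimal sketch, with entirely invented data and field names, of that linking step:

```python
# Hypothetical illustration (all IDs and fields invented): link a
# program's monitoring records to follow-up survey responses by client
# ID, and flag clients who appear in only one source.
monitoring = {101: {"visits": 4}, 102: {"visits": 2}, 103: {"visits": 5}}
survey = {101: {"income": 1200}, 103: {"income": 900}, 104: {"income": 700}}

# Clients present in both sources: usable for the merged analysis.
matched = {cid: {**monitoring[cid], **survey[cid]}
           for cid in monitoring.keys() & survey.keys()}

# Coverage gaps: survey attrition vs. clients missing from monitoring.
monitoring_only = sorted(monitoring.keys() - survey.keys())
survey_only = sorted(survey.keys() - monitoring.keys())

print(sorted(matched))                 # analyzable clients
print(monitoring_only, survey_only)    # where the two systems disagree
```

In practice the mismatch rates themselves are informative: heavy attrition from monitoring to survey, or clients surveyed but absent from the monitoring system, is often the first sign of the pitfalls the post alludes to.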
In a New York Times column last Friday, David Brooks discussed a book by Jim Manzi and extolled the idea of randomized field trials as a way for the US to make better policy.
While it’s nice to welcome Citizen Brooks into the fold, there are a couple of points in his article worth exploring a bit.