What on Earth is ‘Development Impact’ Exactly?

Berk Ozler

Given that we are in a somewhat reflective mood this week because it has been a year since we started this blog, I figured I’d highlight some of the comments we have received so far here and share some of my thoughts on these comments and on related issues that have been on my mind recently as potential posts…

One of the comments we received was not to change too much, and that sounds about right to me. This is not a blog with a huge following by any means – we’re a niche blog – and we just have not seen or heard anything to suggest making substantial changes. We’ll keep trying to post every day (which does take discipline), try to maintain the standard we hope we’ve established, and encourage more guest posts; all in all, it is probably reasonable to expect more of the same in the upcoming months.

Another theme was along the lines of posting more on longer-term effects and final outcomes. This is an entirely reasonable request, but it is constrained by the fact that there are just not that many program evaluations out there that go back to the original program beneficiaries and a reasonable counterfactual group for data collection after a long time. They do exist, however. Perhaps we need to make more of an effort to search actively for these evaluations and publish blog posts highlighting the interesting ones. If you’re aware of a rigorous impact evaluation (no, rigorous is not a code word for RCT) that has recently disseminated long-term impacts or has gone back to the field, please feel free to let us know and we’ll take a look.

Ranil’s final comment was that he wanted to see more posts that are critical of the papers being covered. This is also well taken, but readers should understand that this is a tough one. It takes a lot of care to write something that is critical yet constructive. Such a post essentially becomes a referee report, but one that is public and that no one asked you to write. Referee reports are effective partly because they don’t hold back on criticism – while one’s initial reaction to reading such a report as the author of a paper might be negative, eventually responding to the points made in it usually improves the work. Writing such a review and posting it for everyone to see is a bit nerve-wracking, at least for me...

Ironically, I wrote such a post last week about the double-blind RCT paper that was making the rounds in the blogosphere. In that case, so many people had posted about the paper, pretty much uncritically, and had also been emailing us to ask what we thought, that a post reviewing it became almost impossible to avoid.

All in all, I would say that this was an isolated incident. If the paper had not gotten the sensationalist coverage that it did in the days prior to our post, it would not have been covered on our blog. And that is the result of a conscious decision the four of us made when we were starting this blog: we would rather cover papers that are interesting and useful for our readers than pick out papers with a variety of flaws and expose them publicly. There are instances when that may become necessary, but these will never become a majority of the posts we do on DI. As for comments being useful for the authors, that is nice, but providing good referee reports as a public good on a regular basis is probably a little more than we can handle at the moment…

A reader wanted more posts on the difficulties of running field experiments. For four guys who run a bunch of field experiments, we write way less about this stuff than we should. I was just in the field in Malawi and posted about one of the things that went wrong with the randomization procedures. I had other ideas but just did not get around to writing them all down. Perhaps we need a corner that appears with regularity devoted to issues from the field – whether they pertain to program design, implementation, data collection, or other difficulties on the ground.

Finally, in the category of “somewhat cranky comments,” we received the following comment from ‘anonymous’:

“If you refer to an article you should provide an ungated link or a copy attached.

And I have not seen anything that talks about the feedback loop or how all your endless studies actually are incorporated into better development programming. Where is that evidence?

By the way, your security system requiring the typing of "2 words" is impossible to read. They are not words but groups of letters. And the sound does not work.”

Thank you for these comments. On the first one, we try our best to provide “gated” and “ungated” versions of papers if they exist, but will try to do even better in the future. On the last one, the CAPTCHA system is an undiscriminating enemy to all – we also suffer from it like our readers if we try to comment or respond to comments while not inside the Bank’s firewall. We know this is annoying but there is nothing we can do about it – short of taking our blog to a platform outside the World Bank’s.

The question in the middle is a good one: impact studies usually measure a program’s effects on outcomes among the target population. But the real aim is to have an impact on policy, so that outcomes change in the desired direction for the population as a whole. In a sense, Development Impact means both, and it has to: the people doing the research are not the same people designing development policy at scale. So, who is documenting the impact of our “endless” studies on policy?

I actually get this question a lot: in fact, I get it from every donor that has funded one of my studies. And the answer is really difficult. Sometimes, if I’m lucky, there are follow-up studies in other settings. It is not unusual for a variety of people from Bank operations, other donor organizations, or a government department in a developing country (or even in the US!) to ask me about our findings and for my suggestions on the design of an upcoming policy intervention they are contemplating. Sometimes, someone might tell you that your findings in study X made “approach Y” the “next big thing.” Which one(s) of these indicate(s) meaningful impact of our studies? I am not sure.

As a smart reader of our blog pointed out to me recently, how research gets used in making development policy reflects on both the researchers and the policy-makers. Martin Ravallion has written about this recently here. These things take a long time and are somewhat amorphous. It’s like trying to figure out the effect of blogs on policy, which we took a stab at anecdotally in this paper with David McKenzie. Many people said that the impact is likely small. Is the same true for publications? I am not sure. Simply publishing a study likely plays a role; dissemination probably plays a bigger role; and determined development practitioners, who tirelessly try to convince others while also figuring out how to operationalize a research finding in the real world, play the largest role. Believe it or not, the much-maligned World Bank does a fair bit of that, not least by making pots of money available (or raising funds from other donors) toward operationalizing important research findings.

In the end though, I do believe that publishing studies that are pertinent for policy in developing countries and trying to disseminate them through a variety of avenues, including blogs like this one, does have an impact. Often this may be very simply by making policy-makers much more sensitive to the quality of evidence that is coming their way. And that is reason enough to continue to make sure that I have something sensible to write once a week…


Comments

Submitted by Lee on
I think on your last point, that Keynes quote is pretty relevant: "Practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist." Even if individual evaluations, blogs, or published articles do not have large direct policy influence, I think there is no doubt that the standard of acceptable evidence used for policy-making in development has been raised as a direct result of the movement to RCTs over the last decade. Even organisations which don't use RCTs have been forced to at least talk, and thereby think, in terms of impact and counterfactuals, which is a fantastic achievement. Keep up the good work!