
Evaluating Impact: The Jury's Still Out


In reading Tom's excellent post on CIMA's new report on independent media development efforts, I was struck yet again by how little we know about the impact of media development assistance - and how little we know about what we know. For instance, it's commonly held that donors need to be able to understand the impact of their assistance, to make sure their dollars are being spent wisely and in the right place. But how should we determine this?

One way to do so is through individual assessments of the programs in question. Donors often commission qualitative assessments of the programs they fund. Yet each of these assessments measures something different - quantitative outputs (journalists trained, for instance, or seminars held), qualitative impact (how did the program contribute to a stronger media sector?), or broader impact (how did this program strengthen democratic governance in the country in question?). There is no common standard for assessing program success, so combing through individual assessments to define "positive impact" is a bit like comparing apples and oranges.

Another way to think about impact is to use standardized indices - for instance, did the country's Freedom House Freedom of the Press score improve after the assistance program? But this path is also fraught with difficulties, not least of which is attributing the movement in scores to a specific donor program or programs. These indices are usually such broad measures of media development or press freedom in a country that it is extremely difficult (if not impossible) to extrapolate down to the individual program level.

I've been thinking about these issues for a short piece I'm writing on media development metrics, and what I'm finding is that despite catalyzing some of the most lively and contentious discussions at numerous conferences on media development, the issue is still less than clear. Some of those in the field simply throw up their hands when confronted with impact assessment, preferring a heavier emphasis on monitoring the program closely and making adjustments as necessary. But measuring impact is not solely an academic pursuit, or something dreamed up by the bean-counters to make sure money isn't misspent. At its heart, it can ensure that the people whom the assistance is meant to benefit are really being served - and aren't investing their own time and energy in donor-driven pursuits that have already been shown to be unsuccessful. We shouldn't forget that improving our understanding of metrics can and should have tangible positive impacts on those with the most at stake.

Photo Credit: Flickr User ReRod


Submitted by Ken Armstrong on
Hi Shanthi, Sorry... way off-topic: Always hoped, especially when you were overseas, that you would do some investigative reporting on the coming crash of our worldwide economic systems, and the crimes and intrigue behind it...
