
Misadventures in Photographing Impact


One of my favorite papers to present is my paper on improving management in India, in part because we have wonderful photos to illustrate what bad management looks like and what improved practices look like (see the appendix to the paper for some of these). Photographing impact isn’t only useful for presentations and glossy summaries; it may also offer a new form of data. However, this is easier said than done, and today I thought I’d share some misadventures in trying to photograph impacts on small firms.

The context is an experiment I did with Marcel Fafchamps, Simon Quinn and Chris Woodruff in Ghana, in which we randomly gave grants of 150 cedi ($120) to microenterprise owners. The research paper is here. These microenterprises typically do not keep records, and obtaining accurate information on business outcomes requires carefully designed interviewing. The survey data were all collected on PDAs with built-in cameras. We therefore had the idea of asking the interviewers to photograph the firm’s inventories and equipment each survey round, in the hope that this could later be used to physically observe whether firms were expanding or shrinking over the course of our study. We also had in the back of our minds the idea that it might be possible to get an independent expert on these small businesses to look at the photographs (blind to treatment status) and value the inventories and equipment each round, or at least assess whether they appear to have grown.
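For anyone wanting to try the blind-assessment idea, here is a minimal sketch (not something we actually implemented) of how photos could be shuffled and renamed so an expert rates them without seeing firm IDs, survey rounds, or treatment status. The folder and file-naming conventions are purely illustrative assumptions.

```python
# Hypothetical sketch: preparing photos for a blinded expert assessment.
# Assumes photos are stored as <firm_id>_<round>.jpg in a single folder;
# none of these paths or naming conventions come from the original study.
import csv
import random
import shutil
from pathlib import Path

SRC = Path("photos_raw")        # assumed input folder
DST = Path("photos_blinded")    # folder the expert will see
DST.mkdir(exist_ok=True)

photos = sorted(SRC.glob("*.jpg"))
random.seed(42)                  # fixed seed so the blinding key can be reproduced
random.shuffle(photos)           # break any ordering by firm, round, or treatment

with open("blinding_key.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["anon_id", "original_file"])
    for i, photo in enumerate(photos, start=1):
        anon_name = f"photo_{i:04d}.jpg"     # anonymous ID shown to the expert
        shutil.copy(photo, DST / anon_name)  # copy, keeping the originals untouched
        writer.writerow([anon_name, photo.name])
```

The key file stays with the research team, so the expert’s ratings can later be merged back to firms and rounds without the expert ever seeing that information.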

However, the photographs didn’t turn out to be useful for this purpose at all. One issue was that interviews sometimes took place at the home rather than the business, in which case it wasn’t possible to photograph the business at all. But even when photos were taken, we ended up getting pictures like the following:

[Example photographs omitted]

Some of the issues that arose were:

· Interviewers not standing in the same spot each time

· Strong sunlight making it difficult to photograph in some firms

· Difficulties fitting all the inventories and equipment into a single photograph

· Inexperience of interviewers in using digital cameras

From a practical point of view, one of the issues was that when surveying a lot of firms, the file containing all the photos becomes large – approximately 300 MB per survey round, I think. This was too big to email, and given some uploading difficulties, we only saw a few photos early on. We paid some attention to this issue in training, but obviously not enough, and photos weren’t systematically checked each survey round.
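One simple workaround (not something we did at the time) would be to shrink the photos before they leave the field, so a round’s batch is small enough to check routinely. A minimal sketch, assuming the Pillow imaging library and illustrative folder names:

```python
# Hypothetical sketch: shrinking survey photos before upload so each round's
# batch is easy to transfer and review. Folder names and the 1024px / quality
# settings are illustrative assumptions, not from the study.
from pathlib import Path
from PIL import Image

SRC = Path("photos_round3")        # assumed folder of raw PDA photos
DST = Path("photos_round3_small")  # smaller copies for upload and checking
DST.mkdir(exist_ok=True)

for photo in SRC.glob("*.jpg"):
    with Image.open(photo) as img:
        img.thumbnail((1024, 1024))                     # cap the longest side at 1024px
        img.save(DST / photo.name, "JPEG", quality=75)  # recompress to cut file size
```

Even a modest resize and recompression like this typically cuts a batch of JPEGs to a small fraction of its original size, which would have made round-by-round checking far easier.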

So in thinking about doing this in the future, here are some thoughts:

· Most of our treatments require large samples to see effects – if we are talking about something reasonably subtle like a 10-20% increase in average profits, we might not be able to see this in photographs for most firms (although of course it is likely there will be some firms which grow a lot in both treatment and control groups). Photographing impact may work better for interventions changing binary outcomes (like buying a new machine, or a new cow – but then measurement is less problematic for such outcomes anyway).

· Much more attention needs to be paid to logistics and training than we managed here.

· In particular, we need good ways to ensure that photos are taken from exactly the same position each time. My RA, Matt Groh, recently had the good suggestion of having the PDA automatically pull up the photograph taken in the previous survey round, so the interviewer can be prompted to try to take the same photo (a rough sketch of this lookup follows this list).

· Then, of course, even if we can get the photographs taken well, we are still in the infancy of attempts to actually use them.
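To make Matt’s suggestion concrete, here is a minimal sketch of the lookup a survey app would need: given a firm ID and the current round, find the most recent earlier photo so it can be shown to the interviewer before the new shot is taken. The file-naming convention is an assumption for illustration; a real CAPI/PDA application would wire this into its photo-capture screen rather than run it as a standalone script.

```python
# Hypothetical sketch of the "show last round's photo" prompt. Assumes photos
# are saved as <firm_id>_round<k>.jpg in a shared folder; both the naming and
# the folder are illustrative, not from the original study.
from pathlib import Path
from typing import Optional

PHOTO_DIR = Path("firm_photos")  # assumed shared photo folder

def previous_round_photo(firm_id: str, current_round: int) -> Optional[Path]:
    """Return the most recent earlier-round photo for this firm, if any."""
    for rnd in range(current_round - 1, 0, -1):       # walk back through earlier rounds
        candidate = PHOTO_DIR / f"{firm_id}_round{rnd}.jpg"
        if candidate.exists():
            return candidate
    return None

# Usage: before taking the round-4 photo for (hypothetical) firm "GH0123",
# show the interviewer the most recent earlier shot so they can match the framing.
prior = previous_round_photo("GH0123", current_round=4)
if prior:
    print(f"Show interviewer: {prior}")
else:
    print("No earlier photo on file; take a reference shot this round.")
```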

Anyone else got experiences to share on doing this?

Comments

Not quite the same, given we did not have a real experiment, but for the 2008 Economic and Social Progress Report of the IADB, where the topic was social exclusion, we integrated videos as case studies. The main goal was to have cases coming directly from the people who suffer the exclusion, so we ran an open micro-documentary contest in Latin America and received around 120 films from across the Latin American and Caribbean countries. The winners can be watched here: http://www.iadb.org/res/ipes/2008/videos.cfm?language=en

Since then, I have been thinking about how to integrate audio-visuals in impact evaluations, particularly how to really use videos or pictures as an unbiased tool for evaluation, instead of a (severely biased) "soap opera". I wonder if short videos, with pre-defined characteristics, would work better than pictures in a setting like the one you describe here, in terms of giving a more general overview of the situation. It would be great to hear from someone who has used videos not just as a complement to show the setting of a project, but as part of the data for the evaluation itself.

Submitted by Christopher Nelson on
Just a brief comment based on some similar work we did with an IFAD agricultural project in Mozambique. This was a mixed-methods study, but interestingly our use of cameras also failed: we only found out through extended interviews that business owners carefully adjusted their stock based on seasonal factors (harvest cash flows) and weather conditions (ideal storage environments). This meant stock was regularly moved between places (home and shop) and varied considerably at different times of the year (very little food after harvest and more expensive infrastructure items, which were then replaced with food items as food stocks fell). This made our photos lovely for our publication, but useless as an inventory tool.

I recently got a pocket camcorder with a built-in USB drive (making uploading easy). It also comes with software (which can rectify some of the issues with bad photos) and can take both recorded footage and still photos. Perhaps you could try using one of these? They require very minimal training and come with plenty of built-in memory; plus, they link to sites like YouTube and Flickr, so file sharing/sending isn't so much of an issue.