
Notes from the field: the usefulness of early workshops


One of the things I have learned from other folks I work with at the Bank is the usefulness of doing a workshop early in the design of an impact evaluation to bring the project team and the impact evaluation team together to hammer out the design. With one of my colleagues, I ran one of these during my recent trip to Ethiopia, and a bunch of things stuck out.

First, though, it’s useful to discuss how these are structured. What I like to do is spend about half a day on three initial presentations. The first one covers the basics of impact evaluation – and in my experience this has to be pitched in a way that any reasonably smart person can understand. (Education level is probably a bad variable to use to target this: during one of these workshops, the lead contact on the implementation side approached me and told me that a lot of the group were shop stewards – as in metalworking and coffin making. They did a hell of a lot better than some folks I have seen with master’s degrees at understanding not only the need for a control group, but how to construct one.) The second covers the use of impact evaluation for decision making (e.g. justifying programs, scaling up, experimenting with different versions of the program). The third covers the available impact evaluation evidence and the big unanswered questions in the area(s) most relevant to the program at hand.

The next afternoon and the following morning are spent discussing the design of the impact evaluation. Here we start with a quick review of what the program will do – tracing inputs on out through outputs, outcomes, and impacts (for the old school among us, this is akin to working through the log frame). Then: what are the big questions the evaluation can answer? I try to keep this as broad as possible to bring the team in and see what kinds of things we might do – this expansive list helps get people excited about what we are doing. Then we talk about specifics – what indicators will we use (this starts to set the scene for the data collection exercise)? And then the rubber meets the road: what evaluation method are we going to use and how are we going to set it up? This is where things start to get interesting – new details about how participants will be selected emerge, assumptions about how the program will work get reexamined, and complementary interventions get revealed. Critically, this is also where the tradeoffs (if any) between the way the program has been conceived and the feasibility of a rigorous evaluation become apparent. This is also where the questions get weeded down – in the Ethiopia workshop we started with seven treatment arms, and it quickly became apparent what acrobatics would be needed to pull that off (the back-of-the-envelope calculation below gives a sense of why).
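To give a flavor of why seven arms is hard, here is a rough, illustrative power calculation – my own sketch with made-up numbers, not something from the workshop itself. For a modest effect of 0.2 standard deviations, each treatment-versus-control comparison needs roughly 400 observations per arm, so seven treatment arms plus a control add up to something north of 3,000 observations – and that is before any correction for testing multiple hypotheses or any comparisons between the treatment arms themselves.

```python
# Illustrative only: approximate sample size per arm to detect a difference in
# means between one treatment arm and the control (two-sided test, normal
# approximation). The 0.2 SD effect size is an assumption for the example.
from scipy.stats import norm

def n_per_arm(effect_size_sd, alpha=0.05, power=0.80):
    """Approximate observations needed per arm for a two-arm comparison."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to desired power
    return 2 * (z_alpha + z_beta) ** 2 / effect_size_sd ** 2

n = n_per_arm(0.2)   # roughly 392 observations per arm for a 0.2 SD effect
arms = 8             # seven treatment arms plus a control group
print(f"~{n:.0f} per arm, ~{n * arms:.0f} total (before multiple-testing corrections)")
```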

After the discussion of methods (which can easily loop back into the main questions and indicators and take some time), it is time to turn to the nitty-gritty. Where will the data come from? Who will do what? What does the timeline look like? How should we disseminate the outputs (and what form will they take)? And, of course, how will we pay for all of this? In the end, it’s useful to wrap up with the big questions to be answered and to map out (more of a guess, really) how and when they will be answered.

So who should be at the workshop? One key is to have someone who can take program decisions. Their buy-in will be critical in the long run, and they can also credibly weigh in on what is feasible as the evaluation questions meet program realities. It’s also good to have a range of the folks implementing different components or parts of the program present: a) they can make decisions about how to implement their parts, and b) they’ll be critical for managing the integration of program implementation and the evaluation when the time comes to roll the program, and the evaluation, out. For example, the program in our Ethiopia workshop had both business training and microfinance components – and having representatives of both of these elements there was key for understanding how they would work and what we might be able to do. It’s also useful to have some of the program monitoring and evaluation staff there as well – at the very least to make sure that the impact evaluation links into the M&E of the program, for example to provide baseline results for fine-tuning the intervention or to make sure critical administrative data (e.g. loan amounts, attendance records) are available for the analysis in the evaluation. Overall, the key criterion is to have enough (but not too many) people for whom the evaluation will be interesting, and for whom it matters.

The other key parameter is when to do this. My ideal timing, I think, is when the broad parameters of the intervention have been firmed up and the team that will implement it has been identified, but well before any sort of implementation plan has been written down. I have had cases where the workshop had to be repeated (because the team changed or the project changed – or both). But it’s better to plant the seeds early (with room to try interesting stuff) than to fiddle with a concrete plan that people have spent a long time putting together.

A couple of other thoughts on why these workshops are useful. First, they really increase the program team’s ownership of the evaluation – the team sees why it’s useful and gets excited about the questions we’ll be able to answer. This is likely to boost the chances that the evaluation won’t flop: if the team understands the logic of why you are (for example) randomizing in the first place, they’re probably less likely to undo the randomization.

Second, the workshop gives me a sense of the political economy and policy environment into which the results will come. When is the government going to make big policy decisions in this area? What can we have done by then? What kinds of questions are going to be key to answer? Can we do something useful with the baseline?

Third, I get some clues as to what the problems are likely to be. Is there likely to be pushback? Which parts of implementation are going to need really careful scrutiny? (And, of course, the big question: is an impact evaluation feasible here or not?)

Finally, and perhaps most importantly for me, the group discussions kick out a bunch of really useful ideas, angles and options that I wouldn’t have come up with on my own.  

Anyhow, that’s one way to approach getting things set up. Any other advice, insights, and experiences would be welcome. 


Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist's Office
