Tips for writing Impact Evaluation Grant Proposals
Recently I’ve done more than my usual amount of reviewing of grant proposals for impact evaluation work – both for World Bank research funds and for several outside funders. Many of these have been very good, but I’ve noticed a number of common issues cropping up across them – so I thought I’d share some pet peeves/tips/suggestions for people preparing these types of proposals.
First, let me note that writing these types of proposals is a skill that takes work and practice. One thing I lacked as an assistant professor at Stanford was experienced senior colleagues to encourage me and give advice on grant proposals – and it took a few attempts before I was successful in getting funding. Here are some things I find a lot of proposals lack:
· Sufficient detail about the intervention – details matter both for judging whether this is an impact evaluation likely to be of broader interest, and for understanding which outcomes are the right ones to measure and what the likely channels of influence are. So don’t just say you are evaluating a cash transfer program – I want to know what the eligibility criteria are, what the payment levels are, the duration of the program, etc.
· Clearly stating the main equations to be estimated – including what the main outcomes are and what your key hypotheses are (an illustrative example follows this list).
· Sufficient detail about measurement of key outcomes – this is especially true if your outcomes are indices, or outcomes for which multiple alternative measures are possible. E.g. if female empowerment is an outcome, you need to tell us how this will be measured. If you want to look at treatment heterogeneity by risk aversion, how will you measure this?
· How you will know why it didn’t work if it doesn’t work – a.k.a. spelling out mechanisms and a means to test them – e.g. if you are looking at a business training program, you might not find an effect because i) people don’t attend; ii) they attend but don’t learn anything; iii) they learn the material but then don’t implement it in their businesses; iv) they implement the practices in their businesses but implementing these practices has no effect; etc. While we all hope our interventions have big, detectable effects, we also want impact evaluations to be able to explain why an intervention didn’t work if there turns out to be no effect.
· Discussion of timing of follow-up: are you planning multiple follow-up rounds? If only one round, why did you choose one year as the follow-up survey date – is this really the most important follow-up period of interest for policy and theory?
· Discuss what you expect survey response rates to be, and what you will do about attrition. Do you have evidence from other similar surveys of what response rates are likely to be? Do you have some administrative data you can use to provide more details on attritors, or will you be using a variety of different survey techniques to reduce attrition? If so, what will these be?
· Power calculations: it is not enough to say “power calculations suggest a sample of 800 in each group will be sufficient” – you should provide enough detail on assumed means and standard deviations, assumed autocorrelations (and, for clustered trials, intra-cluster correlations) that a reviewer can replicate these power calculations and test their sensitivity to different assumptions (see the sketch after this list).
· A detailed budget narrative: don’t just say survey costs are $200,000 and travel is $25,000. Price out flights and other travel, describe the per-survey costs, and explain why this budget is reasonable.
· Tell the reviewers why it is likely you will succeed. There is a lot to do to successfully pull off all the steps in an impact evaluation, and even if researchers do everything they can, it is inherently a risky business trying to evaluate policies that are subject to so many external forces. So researchers who have a track record of taking previous impact evaluations through to completion and publication should make this experience clear. But if this is your first impact evaluation, you need to give the reviewers some detail on what makes it likely you will succeed – have you previously done fieldwork as an RA? Have you attended a course or clinic to help you design an evaluation? Do you have senior mentors attached to your project? Are you asking first for money for a small pilot to prove you can at least carry out some key step? Make the case that you know what you are doing. Note this shouldn’t just be lines on a C.V., but some description in the proposal itself of the qualifications of your team.
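To make the “main equations” point concrete, here is one illustrative (not prescriptive) example of the kind of estimating equation a proposal might spell out for a randomized program with a single baseline and follow-up round:

Y_{i1} = \alpha + \beta T_i + \gamma Y_{i0} + \varepsilon_{i1}

where Y_{i1} is the outcome of interest at follow-up, Y_{i0} is its baseline value, T_i is an indicator for assignment to treatment, and \beta is the intent-to-treat effect. The proposal should then say exactly how Y is measured, which hypotheses about \beta it will test, and how it will handle heterogeneity or multiple outcomes.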
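Similarly, to show the level of detail the power-calculation point asks for, here is a minimal sketch in Python; every number in it is a placeholder assumption that a real proposal would need to justify from pilot data or comparable surveys. The point is simply that stating each assumption explicitly lets a reviewer replicate the calculation and vary it.

```python
# Minimal, illustrative power calculation for a two-arm experiment.
# All numbers are placeholder assumptions, not recommendations.
from scipy.stats import norm

alpha = 0.05          # significance level (two-sided)
power = 0.80          # desired power
sd = 1.0              # assumed standard deviation of the outcome
mde = 0.20            # minimum detectable effect, in outcome units
rho = 0.40            # assumed baseline-to-follow-up autocorrelation
icc = 0.05            # assumed intra-cluster correlation (clustered designs)
cluster_size = 20     # assumed respondents per cluster
takeup = 0.70         # assumed treatment-control difference in take-up

z_alpha = norm.ppf(1 - alpha / 2)
z_power = norm.ppf(power)

# Post-only comparison of means: required n per arm
n_basic = 2 * (z_alpha + z_power) ** 2 * sd ** 2 / mde ** 2

# ANCOVA with one baseline round roughly scales the variance by (1 - rho^2)
n_ancova = n_basic * (1 - rho ** 2)

# Clustered assignment inflates n by the design effect 1 + (m - 1) * ICC
n_clustered = n_ancova * (1 + (cluster_size - 1) * icc)

# Imperfect take-up dilutes the intent-to-treat effect, inflating n by 1 / takeup^2
n_final = n_clustered / takeup ** 2

for label, n in [("post-only", n_basic), ("ANCOVA", n_ancova),
                 ("clustered", n_clustered), ("with take-up", n_final)]:
    print(f"n per arm, {label}: {n:.0f}")
```

(The last step matters more than people expect: a take-up rate of 50% quadruples the required sample.)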
Note I haven’t commented above about links to policy or explicit tests of theory. Obviously you should discuss both, but depending on the funder and their interests, one or the other becomes relatively more important. One concern I have with several grant agencies is how they view policy impact – I’m sympathetic to the view that, in many cases, what may be most useful for informing policy is not to test the policies themselves but to test the underlying mechanisms behind those policies (see a post on this here). So this might involve funding researchers to conduct interventions that are never themselves going to be implemented as policies, but which tell us a lot about how certain policies might or might not work. I think such studies should be scored just as strongly on policy criteria as studies which evaluate explicit programs.
For those interested in seeing some examples of good proposals, 3ie has several successful proposals in the right menu here. Anyone else got any pet peeves they come across when reviewing proposals, or must-dos? Those on the grant preparation side, any questions or puzzles you would like to see if our readership has answers for?
I presume that, given the WB's breadth of experience, it would be easy to whip up a spreadsheet with the primary framework and inputs for costing out an impact evaluation. I'd love to see a spreadsheet that allows you to enter some key inputs and have it calculate the estimated cost of the evaluation. (Obviously some will be location-specific, e.g. labor costs, and others specific to survey length, etc.) If you know of a costing model I'd love to see it, and I'd be happy to help in its development if it doesn't exist!
I don't know of anything which does this. Part of this is likely because the costs of doing surveys vary so much depending on geographic spread, target population, and survey length. But I agree something like this would be a useful starting point in some cases. If anyone knows of anything like this, let us know.
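To give a sense of what such a starting point might look like, here is a purely hypothetical sketch – every input and rate below is an invented placeholder, not an actual survey cost – of a small calculator that turns a few key inputs into a rough fieldwork budget:

```python
# Hypothetical sketch of a survey costing tool; all rates are invented placeholders.
def survey_cost(n_respondents, rounds, cost_per_interview,
                attrition_rate=0.10, tracking_premium=1.5,
                supervision_share=0.20, travel_budget=0.0):
    """Rough cost of fielding a panel survey over several rounds."""
    total = 0.0
    sample = n_respondents
    for r in range(rounds):
        # Follow-up rounds assumed costlier per interview because of tracking effort
        per_interview = cost_per_interview * (tracking_premium if r > 0 else 1.0)
        fieldwork = sample * per_interview
        total += fieldwork * (1 + supervision_share)   # add supervision/overhead
        sample = int(sample * (1 - attrition_rate))    # expected attrition between rounds
    return total + travel_budget

# Example: 1,600 respondents, baseline plus one follow-up, $40 per interview
print(f"Estimated survey cost: ${survey_cost(1600, 2, 40, travel_budget=25000):,.0f}")
```

The hard part, as noted, is that per-interview costs and attrition assumptions vary so much by country, target population, and survey length that any default numbers would need local adjustment.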
This post is great! I too have been reviewing a lot of proposals lately. I agree with everything you said and will reiterate the importance of clearly presenting power calculations. An evaluation can be beautifully designed, but if I don't think that there will be enough power, I will downgrade the proposal substantially. I feel pretty uncomfortable suggesting that donors spend a lot of money to collect quantitative data if achieving statistical significance is unlikely.
I agree that applicants should include information on assumed means and standard deviations, intra-cluster correlations, etc. and will add that assumed take-up rates should be explicitly stated and justified. Depending on the intervention, take-up may be quite low and could be detrimental to statistical power.
Good advice. Although I agree that all these points are critical to successful quant impact evaluations, I can't help thinking that you're asking for too much at the grant *proposal* stage. Basically you're asking every team to spend weeks preparing the complete evaluation design before they know whether they will receive funding. When evaluating proposals I think the focus should be on the overall strategic importance of the evaluation, i.e. is it an intervention that hasn't been evaluated? Are there other similar programs that would benefit from this evaluation? Is it evaluating a unique mechanism? Of course, perhaps the grant selection processes you were involved with had a pre-selection process where these decisions were made. I also can't help thinking that you're letting methodology drive your funding decisions. In my view, well-done qualitative research can be just as informative as (well-done) quant research – particularly when the causal mechanisms are poorly understood.