Recently I’ve done more than my usual amount of reviewing of grant proposals for impact evaluation work – both for World Bank research funds and for several outside funders. Many of these have been very good, but a number of common issues have cropped up across them, so I thought I’d share some pet peeves/tips/suggestions for people preparing these types of proposals.
First, let me note that writing these types of proposals is a skill that takes work and practice. One thing I lacked as an assistant professor at Stanford was experienced senior colleagues to offer encouragement and advice on grant proposals, and it took a few attempts before I was successful in getting funding. Here are some things I find a lot of proposals lack:
· Sufficient detail about the intervention – details matter both for judging whether this is an impact evaluation likely to be of broader interest and for understanding what the right outcomes to measure are and what the likely channels of influence are. So don’t just say you are evaluating a cash transfer program – I want to know what the eligibility criteria are, what the payment levels are, how long the program lasts, etc.
· Clearly stating the main equations to be estimated – including what the main outcomes are and what your key hypotheses are (a sketch of the kind of specification I mean appears after this list).
· Sufficient detail about measurement of key outcomes – this is especially important if your outcomes are indices, or outcomes for which multiple alternative measures are possible. If female empowerment is an outcome, you need to tell us how it will be measured (one common way of building such an index is sketched after this list). If you want to look at treatment heterogeneity by risk aversion, how will you measure risk aversion?
· A way of knowing why it didn’t work if it doesn’t work – a.k.a. spelling out mechanisms and a means to test them. E.g. if you are looking at a business training program, you might not find an effect because i) people don’t attend; ii) they attend but don’t learn anything; iii) they learn the material but then don’t implement it in their businesses; or iv) they implement the practices in their businesses but implementing them has no effect. While we all hope our interventions have big, detectable effects, we also want impact evaluations to be able to explain why an intervention didn’t work if there turns out to be no effect.
· Discussion of the timing of follow-up: are you planning multiple follow-up rounds? If only one round, why did you choose that follow-up date – if it is the typical one year after treatment, is this really the most important follow-up period for policy and for theory?
· Discussion of what you expect survey response rates to be, and what you will do about attrition. Do you have evidence from other similar surveys of what response rates are likely to be? Do you have administrative data you can use to provide more detail on attritors, or will you be using a variety of survey techniques to reduce attrition? If so, what will these be?
· Power calculations: it is not enough to say “power calculations suggest a sample of 800 in each group will be sufficient”. You should provide enough detail on assumed means and standard deviations, assumed autocorrelations (and, for clustered trials, intra-cluster correlations) that a reviewer can replicate the power calculations and test their sensitivity to different assumptions (see the sketch after this list).
· A detailed budget narrative: don’t just say survey costs are $200,000 and travel is $25,000. Price out flights and so on, describe the per-survey costs, and explain why this budget is reasonable.
· Tell the reviewers why it is likely you will succeed. There is a lot involved in pulling off all the steps of a successful impact evaluation, and even if researchers do everything they can, it is inherently risky to evaluate policies that are subject to so many external forces. So researchers who have a track record of taking previous impact evaluations through to completion and publication should make this experience clear. But if this is your first impact evaluation, you need to give reviewers some detail on what makes it likely you will succeed: have you previously done fieldwork as an RA? Have you attended a course or clinic to help you design an evaluation? Do you have senior mentors attached to your project? Are you asking first for money for a small pilot to prove you can carry out some key step? Make the case that you know what you are doing. Note this shouldn’t just be lines on a C.V., but a description in the proposal itself of the qualifications of your team.
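To make the point about estimating equations concrete, here is a minimal sketch of the kind of specification I’d like to see spelled out, assuming a simple individually randomized design with a baseline survey (the notation is illustrative, not a required template):

$$Y_{i,1} = \alpha + \beta\,Treat_i + \gamma\,Y_{i,0} + \varepsilon_i$$

where $Y_{i,1}$ is the outcome at follow-up, $Y_{i,0}$ is the baseline value of the same outcome, and $Treat_i$ is an indicator for random assignment to treatment. The key hypothesis is then a test of $\beta = 0$, and the proposal should state what the main outcomes $Y$ are and the expected sign of $\beta$ for each.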
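And to illustrate the measurement point: if an outcome like female empowerment is an index, say which components go into it and how they are aggregated. Below is a rough sketch of one common approach (averaging z-scores of the components, each standardized against the control group); the column names here are hypothetical placeholders, not part of any actual proposal:

```python
import pandas as pd

def summary_index(df: pd.DataFrame, components: list[str],
                  control_col: str = "control") -> pd.Series:
    """Average-of-z-scores summary index: standardize each
    component against the control group's mean and SD, then
    take the simple average across components."""
    control = df[df[control_col] == 1]
    z_scores = pd.DataFrame({
        c: (df[c] - control[c].mean()) / control[c].std()
        for c in components
    })
    return z_scores.mean(axis=1)

# Hypothetical usage with made-up component names:
# df["empowerment_index"] = summary_index(
#     df, ["decides_spending", "mobility", "owns_assets"])
```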
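Finally, on power calculations, the goal is replicability: every assumption should be visible so a reviewer can vary it. Here is a minimal sketch of what I have in mind, using a standard normal-approximation formula for the minimum detectable effect; all the numbers in the example are purely illustrative assumptions:

```python
# Sketch of a replicable power calculation for a two-arm trial.
# Every input is an assumption a reviewer should be able to see and vary:
# the outcome SD, the baseline-follow-up autocorrelation (for ANCOVA),
# and the intra-cluster correlation and cluster size (for clustered designs).
from scipy.stats import norm

def mde(n_per_arm, sd, alpha=0.05, power=0.80,
        autocorr=0.0, icc=0.0, cluster_size=1):
    """Minimum detectable effect under a normal approximation.

    autocorr: correlation of the outcome between baseline and follow-up;
              ANCOVA cuts residual variance by (1 - autocorr**2).
    icc, cluster_size: inflate variance by the design effect
              1 + (cluster_size - 1) * icc for cluster-randomized trials.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    design_effect = 1 + (cluster_size - 1) * icc
    var = (sd ** 2) * (1 - autocorr ** 2) * design_effect
    return z * (2 * var / n_per_arm) ** 0.5

# E.g. 800 units per arm, outcome SD of 100, baseline autocorrelation 0.4,
# clusters of 20 with ICC 0.05 (all numbers purely illustrative):
print(mde(800, 100, autocorr=0.4, icc=0.05, cluster_size=20))
```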
Note I haven’t commented above about links to policy or explicit tests of theory. Obviously you should discuss both, but depending on the funder and their interests, one or the other becomes relatively more important. One concern I have with several grant agencies is how they view policy impact: I’m sympathetic to the view that what may be most useful for informing policy in many cases is not to test the policies themselves but to test the underlying mechanisms behind those policies (see a post on this here). So this might involve funding researchers to conduct interventions that are never themselves going to be implemented as policies, but which tell us a lot about how certain policies might or might not work. I think such studies should be scored just as strongly on policy criteria as studies that evaluate explicit programs.
For those interested in seeing some examples of good proposals, 3ie has several successful proposals in the right menu here. Anyone else have pet peeves they come across when reviewing proposals, or must-dos? And for those on the grant-preparation side, are there any questions or puzzles you would like to see whether our readership has answers to?