
Democracy isn't dead

Markus Goldstein

At least not in Benin. This week, I take a look at an interesting paper by Leonard Wantchekon documenting an experiment he ran in Benin during this year’s presidential election. In the paper, Leonard compares the results of a deliberative approach, in which a candidate’s platform is discussed in a local town hall, against one-way communication, in which the candidate (or his broker) addresses a big rally.

How does he do it? Leonard has an interesting history, and this gives him the unusual ability to run experiments within national party politics. In this instance, he is trying to get some insight into how to reduce clientelism. So, working with the campaign managers of three top contenders for president, and drawing a roughly nationwide sample, he randomizes villages into treatment and control. Treatment in this case is the town hall model: a research assistant and a campaign representative organize two town hall meetings, one on education and health and a second on rural infrastructure and employment. Villagers debate policy proposals, and the results are then transmitted up the campaign hierarchy. In the control areas, there are 2-3 rallies, organized by a local political broker (e.g., an MP or a local mayor), where either the candidate or the broker makes a speech covering the candidate’s policies. “There was no debate, but instead a festive atmosphere of celebration with drinks, music, and sometimes cash and gadget distribution.” Not surprisingly, a lot more folks come to these.
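To make the design concrete, here is a minimal sketch of the assignment step in Python. The village names, the sample size, and the 50/50 split are my own illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the assignment step described above: villages are randomly
# split into a town-hall "treatment" arm and a rally "control" arm.
# Village names, the sample size, and the 50/50 split are illustrative
# assumptions, not details taken from the paper.
import random

random.seed(0)  # fixed seed so the illustration is reproducible

villages = [f"village_{i:03d}" for i in range(200)]  # hypothetical village list
random.shuffle(villages)

half = len(villages) // 2
assignment = {v: "town_hall" for v in villages[:half]}
assignment.update({v: "rally" for v in villages[half:]})

print(sum(arm == "town_hall" for arm in assignment.values()), "treatment villages")
print(sum(arm == "rally" for arm in assignment.values()), "control villages")
```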

What was surprising to me is the relative cost. Rallies cost about $15 per participant, while town halls cost about $2. And about 40% of the rally cost is a direct transfer to the broker. (It’s also interesting to see that, over the course of the election, cash and gifts to voters appear to end up evenly distributed across treatment and control, so direct payments to voters aren’t going to be driving things.) One final wrinkle on the treatment: each treatment village is also paired with the incumbent or one of his two challengers, which lets us look at effects by candidate.
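As a toy illustration of that wrinkle and of the cost gap, the sketch below matches hypothetical treated villages to the three campaigns and restates the per-participant cost figures from the post. Only the $15, $2, and 40% figures come from the text; the village and candidate labels are placeholders.

```python
# Toy illustration of the design wrinkle above: each treated (town-hall) village
# is also matched to one of the three participating campaigns. Village and
# candidate labels are placeholders, not data from the paper.
treated_villages = [f"village_{i:03d}" for i in range(100)]  # hypothetical
candidates = ["incumbent", "challenger_A", "challenger_B"]
pairing = {v: candidates[i % len(candidates)] for i, v in enumerate(treated_villages)}

# Back-of-envelope on the costs quoted in the post: rallies run about $15 per
# participant versus about $2 for a town hall, and roughly 40% of the rally
# cost is a direct transfer to the broker.
rally_cost, town_hall_cost, broker_share = 15.0, 2.0, 0.40
print(f"broker transfer per rally participant: ${rally_cost * broker_share:.2f}")
print(f"rally vs. town-hall cost per participant: {rally_cost / town_hall_cost:.1f}x")
```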

So what does he find? Turnout is higher in treatment areas, by around 5%. So this is the big headline: from a get-out-the-vote point of view, it’s just not efficient to do these big, expensive rallies. Stick to the town halls, and have a good discussion.

Now, Leonard and his team collected two types of data: village electoral returns and post-election surveys. These two sources give somewhat different pictures of the effects. For voter turnout, both show an impact on overall levels, but the electoral returns suggest that turnout was significantly higher only for the challengers, not the incumbent, while in the individual survey data the results hold for both the challengers and the incumbent. There is also a difference in whether the treatment leads to higher electoral results for the treated candidate. In the village-level data, there is no effect. But in the individual survey data, the town hall approach boosts votes for the treated candidates by 16 percent, and this appears to be driven by a boost for the opposition, not the incumbent. So from a methodological point of view, it would be interesting to understand more about why these measures give us different results, but the paper doesn’t discuss this (in its current version).
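For readers who want to picture the two estimation exercises, here is a rough sketch of how one might run them. The column names, the use of plain OLS, and the clustering choice are my assumptions for illustration, not the paper’s actual specifications.

```python
# Sketch of the two comparisons discussed above, under simplifying assumptions:
# (1) village-level electoral returns: regress turnout on a treatment dummy;
# (2) individual survey data: regress an indicator for voting for the treated
#     candidate on the same dummy, clustering standard errors by village.
# Column names and the use of plain OLS are assumptions for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

def village_level_effect(returns: pd.DataFrame):
    """returns: one row per village, with 'turnout' (share) and 'treated' (0/1)."""
    return smf.ols("turnout ~ treated", data=returns).fit(cov_type="HC1")

def individual_level_effect(survey: pd.DataFrame):
    """survey: one row per respondent, with 'voted_treated_candidate' (0/1),
    'treated' (0/1), and a 'village' identifier for clustering."""
    return smf.ols("voted_treated_candidate ~ treated", data=survey).fit(
        cov_type="cluster", cov_kwds={"groups": survey["village"]}
    )
```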

Oh, and the election results? The incumbent won by a significant margin. And the former IMF official came in third.

 

P.S. Many thanks for the interesting comments on last week’s data access post. Please keep those coming, and I hope to do a follow-up post in a while.

Comments

Submitted by tabeera on
Dear Sir, I am new to the research field. What books/papers should I read to learn more about impact evaluations and research methodologies? Thanks, Tabeera, PhD student, University of Delhi

Submitted by Scott Bayley on
Asian Development Bank, 2006, Impact Evaluation: Methodological and Operational Issues. (a brief introduction, with an interesting discussion of common objections to impact studies)
Barlow & Hersen, 1989, Single Case Experimental Designs, Pergamon. (an under-utilized approach in my opinion)
Bamberger et al., 2006, RealWorld Evaluation, Sage. (an excellent overview of how to undertake evaluations of development programs while facing various types of constraints; also includes a discussion of the most commonly used designs for evaluating the impact of development programs)
Becker, 2000, Discussion Notes: Causality, http://web.uccs.edu/lbecker/Psy590/cause.htm (a brief summary of different philosophical perspectives on causality)
Boruch, 2005, Randomized Experiments for Planning and Evaluation: A Practical Guide, Sage. (a good introduction to the topic)
Brady, 2002, Models of Causal Inference: Going Beyond the Neyman-Rubin-Holland Theory, paper presented at the Annual Meetings of the Political Methodology Group, University of Washington, Seattle, Washington. (reviews four of the more common theories of causality)
Brinkerhoff, 1991, Improving Development Program Performance: Guidelines for Managers, Lynne Rienner. (includes a discussion of the most common causes of performance problems in development programs)
Campbell & Stanley, 1963, Experimental and Quasi-experimental Designs for Research, Rand McNally. (the all-time classic text; discusses the strengths and weaknesses of various research designs for assessing program impacts)
Cook & Campbell, 1979, Quasi-experimentation, Houghton Mifflin. (excellent; contains a useful review of different theories of causality and how to test for causal relationships, as well as the application of quasi-experiments for impact evaluations)
Cook, Shadish & Wong, 2008, 'Three Conditions under Which Experiments and Observational Studies Produce Comparable Causal Estimates: New Findings from Within-Study Comparisons', Journal of Policy Analysis and Management, 27, 4, 724-750. (recommends regression-discontinuity designs, matching geographically local groups on pre-treatment outcome measures, and modeling a known selection process)
Cracknell, Basil Edward, 2000, Evaluating Development Aid: Issues, Problems and Solutions, Sage Publications, New Delhi. (interesting discussions of evaluating for learning vs. evaluating for accountability, and of the politics of evaluation)
Davidson, E. J., 2004, Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation, Sage. (suggests 8 techniques for causal inference)
Davis, 1985, The Logic of Causal Order, Sage. (worth a quick read)
Donaldson, Christie & Mark, 2009, What Counts as Credible Evidence in Applied Research and Evaluation Practice?, Sage. (offers a range of perspectives)
European Evaluation Society, 2007, Statement: The Importance of a Methodologically Diverse Approach to Impact Evaluation.
Glazerman, Levy & Myers, 2002, Nonexperimental Replications of Social Experiments: A Systematic Review, Mathematica Policy Research Inc. (concludes that, more often than not, statistical models do a poor job of estimating program impacts). Available free at: http://www.mathematica-mpr.com/publications/PDFs/nonexperimentalreps.pdf
Gilovich, 1991, How We Know What Isn’t So: The Fallibility of Human Reason in Everyday Life, Free Press, New York.
Guba and Lincoln, 1989, Fourth Generation Evaluation, Sage. (the authors argue that 'cause and effect' do not exist except by imputation; a constructivist perspective)
Hatry, 1986, Practical Program Evaluation for State and Local Governments, Urban Institute Press. (a good introduction; includes a review of the circumstances in which it is feasible to use experimental designs)
Holland, P., 1986, 'Statistics and Causal Inference', Journal of the American Statistical Association, Vol. 81, 945-960.
Judd & Kenny, 1981, Estimating the Effects of Social Interventions, Cambridge. (heavy emphasis on statistical applications, for the enthusiast)
Kenny, 2004, Correlation and Causality. (a very technical book about analysing causal impacts using statistical models). Available free at: http://davidakenny.net/cm/cc.htm
Langbein, 1980, Discovering Whether Programs Work, Goodyear. (good but technical)
Mark & Reichardt, 2004, 'Quasi-experimental and correlational designs: Methods for the real world when random assignment isn’t feasible'. In Sansone, Morf and Panter (eds), Handbook of Methods in Social Psychology, pp. 265-286, Sage. (useful introductory overview)
Mayne, J., 2008, Contribution Analysis: An Approach to Exploring Cause and Effect, ILAC Brief 16. (an increasingly popular approach based on program theory, and one that shares its strengths/weaknesses)
McMillan, 2007, Randomized Field Trials and Internal Validity: Not So Fast My Friend. (good overview of the limitations). Available free at: http://pareonline.net/pdf/v12n15.pdf
Miles and Huberman, 1994, Qualitative Data Analysis, Sage. (contains examples of undertaking causal analysis with qualitative data)
Mohr, 1995, Impact Analysis for Program Evaluation, Sage. (an advanced discussion of research designs and impact analysis)
Network of Networks on Impact Evaluation, 2009, Impact Evaluations and Development – NONIE Guidance on Impact Evaluations. (a review of the methods commonly used by development agencies). Available free at: http://www.worldbank.org/ieg/nonie/guidance.html
Nisbett and Ross, 1985, Human Inference: Strategies and Shortcomings of Social Judgments, Prentice-Hall. (explains why people struggle to accurately perceive causal relationships)
Perrin, Burt, 1998, 'Effective Use and Misuse of Performance Measurement', American Journal of Evaluation, Vol. 19(1): 367-379. (excellent)
Posavac & Carey, 2002, Program Evaluation: Methods and Case Studies, Prentice Hall. (good all-round text; includes a summary of the types of evaluation questions that can be answered by particular research designs)
Reichardt, 2000, 'A typology of strategies for ruling out threats to validity'. In Bickman (ed), Research Design: Donald Campbell’s Legacy, Sage.
Reynolds & West, 1987, 'A multiplist strategy for strengthening nonequivalent control group designs', Evaluation Review, 11, 6, 691-714. (an excellent example of how to fix up a weak research design by adding additional features, thereby improving your assessment of the program’s impact)
Rogers et al., 2000, Program Theory in Evaluation: Challenges and Opportunities, No. 87. (a series of papers on the strengths and weaknesses of using program theory to assist with causal analysis)
Roodman, 2008, Through the Looking Glass, and What OLS Found There: On Growth, Foreign Aid, and Reverse Causality, Working Paper 137, Center for Global Development.
Rossi, Lipsey & Freeman, 2003, Evaluation – A Systematic Approach, Sage. (recommended; includes an excellent discussion of different types of research designs and when to use each of them)
Rothman & Greenland, 2005, 'Causation and Causal Inference in Epidemiology', American Journal of Public Health, Vol. 95, No. S1.
Shadish, Cook and Campbell, 2002, Experimental and Quasi-Experimental Designs for Generalized Causal Inference, Houghton Mifflin. (advanced text)
Shadish, Cook and Leviton, 1991, Foundations of Program Evaluation, Sage. (the final chapter contains an excellent summary of evaluation theory in relation to program design, evaluation practice, and theory of use)
Spector, 1981, Research Designs, Sage. (a basic introduction)
Stame, Nicoletta, 2010, 'What Doesn’t Work? Three Failures, Many Answers', Evaluation, 16, 4, 371-387. (useful review of current debates on impact evaluation methodology)
Trochim, 1984, Research Designs for Program Evaluation: The Regression Discontinuity Approach, Sage. (excellent method for evaluating impacts where entry into the program depends upon meeting a numerical eligibility criterion, e.g. income less than X, academic grades more than Y)
Trochim, 1989, 'Outcome Pattern Matching and Program Theory', Evaluation and Program Planning, Vol. 12: 355-366.
World Bank (Independent Evaluation Group), 2006, Conducting Quality Impact Evaluations Under Budget, Time and Data Constraints. (a highly summarized version of Bamberger’s book). Available free at: http://www.worldbank.org/ieg/ecd/conduct_qual_impact_eval.html
World Bank (Independent Evaluation Group), no date, Impact Evaluation – The Experience of the Independent Evaluation Group of the World Bank. Available free at: http://lnweb18.worldbank.org/oed/oeddoclib.nsf/DocUNIDViewForJavaSearch/35BC420995BF58F8852571E00068C6BD/$file/impact_evaluation.pdf
Yin, 2000, 'Rival Explanations as an Alternative to Reforms as Experiments'. In Bickman (ed), Validity and Social Experimentation, Sage. (good review of how to identify and test rival explanations when evaluating reforms or complex social change)
Yin, 2003, Applications of Case Study Research, Sage. (very good reference; includes advice on undertaking causal analysis using case studies)

Cheers, Scott Bayley