Published on Development Impact

Adequacy of Reporting in Economics

Should the identity of the author affect the interpretation of the existing evidence? You might answer ‘no,’ but it does. And when it does, it may affect the decisions of influential people and institutions, such as a multilateral donor organization or, in the following case, a high-level panel discussing the post-MDGs development agenda.
I recently came across this paper, titled “Inequality, Poverty, and Antipoverty Transfers” (I find that clicking on papers under ‘My Updates’ on Google Scholar is a good way to keep up with the literature you’re interested in). When I was reading the sub-section on the role of conditions in social assistance programs, the following paragraph, which summarizes the evidence on the marginal effect of conditions in cash transfer programs, caught my attention in particular – unfortunately not in a good way:

There is very little hard evidence on the separate effectiveness of conditions, but there is evidence that programmes with conditions achieve their objectives (42). Consequently, researchers have looked into the details of programme implementation to identify whether ‘natural experiments’ could throw light on this issue. The extension of the Bono de Desarrollo Humano in Ecuador, a human development transfer programme, is interesting because the programme was advertised, to beneficiary households and the general public, as including conditions on schooling and health but in practice the government was not in a position to implement the conditions. A study compared information on schooling responses from households who understood there was a relevant condition in the programme and households reporting having no knowledge of conditions. It finds that the belief that conditions were part of the programme did influence positively their schooling decisions (43). The initial introduction of Mexico’s Progresa seems to provide another ‘natural experiment’. Compliance with schooling conditions is monitored through a form which beneficiaries take to school to be filled in, but for administrative reasons a group of beneficiaries were not issued with the forms. A study compares the schooling responses of beneficiary households without forms or knowledge of conditions, and other groups of beneficiary households. It finds that knowledge of conditions seems to have influenced schooling decisions at the secondary school level (44). The studies suggest conditions may matter, but it is hard to generalise from these highly specific settings. At any rate, the effects are likely to be small, for example in Mexico school enrolment rates in primary education were above 90 percent before the introduction of Progresa so that the effect of conditions could at best bind on the 10 percent of children not enrolled at school. A point often missed in policy discussions is that it is the marginal, not the average, effect of conditions that indicates their effectiveness.

The numbers in parentheses refer to footnotes, and footnote 42 reads as follows: "FISZBEIN, A. & SCHADY, N., Conditional Cash Transfers. Reducing Present and Future Poverty, Washington DC, The World Bank, 2009. Short-term experiments with conditions suggest they can be effective in this context. See BAIRD, S., MCINTOSH, C. & ÖZLER, B., Cash or condition? Evidence from a cash experiment, Quarterly Journal of Economics, 126, 1709-1753, 2011."

OK, where to begin? The author is summarizing the evidence on the marginal effect of conditions over and above the income effects of cash transfers. It is true that, until recently, the evidence he summarizes was the best we had on this question. However, we now have two experiments from two different countries designed to answer exactly this question: one is a paper by Akresh et al. from Burkina Faso and the other is this paper by Baird et al. from Malawi. The author chose not to cite one and demoted the other to a footnote. What evidence deserves a long paragraph over the findings from randomized experiments designed to address this question? Apparently, natural experiments that take advantage of imperfections in program implementation. I am not saying that there are no circumstances under which you’d prefer quasi- or non-experimental evidence over RCTs, but this is not one of those cases – not by a long shot. And even if you thought it was, you’d need to explain the criteria by which you elevate some papers above others – for example, using risk-of-bias criteria, as is common in meta-analyses.
The author starts the paragraph by telling us that hard evidence on this question is ‘very little.’ I guess it is a good thing for our field that two RCTs, two natural experiments, and two papers based on structural models and micro-simulations (all of which, by the way, give more or less the same answer – that conditions are important in improving school participation) constitute very little evidence for a single policy question. The author then further diminishes the importance of this very little ‘hard’ evidence by stating in the footnote that ‘short-term experiments’ suggest that conditions can be effective. ‘Short-term’ here seemingly serves to undermine the external validity of these studies, but guess what? While the two RCTs report program effects after two years, the two natural experiments the author focuses on report effects after 18 months.

Next, the author states that ‘…conditions may matter, but it is hard to generalise from these highly specific settings.’ First, even the evidence he chooses to review suggests that conditions do matter. Second, it is apparently hard to generalize from the experiences of three different countries, two of which have national programs. We all complain about external validity and warn against cookie-cutter approaches to policymaking, but it is possible to take this concern too far: at the extreme, we cannot learn anything from any study, because what happened today may not be relevant for what will happen tomorrow… (The programs that are cited as ‘highly specific’ are national, scaled-up programs in Ecuador and Mexico, the latter of which regularly hosts policymakers from other developing countries who visit to learn about PROGRESA and Oportunidades.)
The Malawi study that is relegated to a footnote states the following on cost-effectiveness in its concluding section: "Not only is school enrollment significantly improved in the CCT arm over the UCT arm, but the evidence presented shows that CCTs are more cost-effective in raising enrollment than UCTs in this context. To achieve the same enrollment gain obtained from a $5/month total transfer in the CCT arm, a transfer of more than $10 to the parents in the UCT arm is needed. This difference is much larger than the additional cost of administering a CCT program—possibly by an order of magnitude." This is one data point that speaks directly to one of the author’s main concerns about cash transfers – the cost-effectiveness of attaching conditions – but it is ignored.
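As a purely illustrative aside, the logic of that quoted comparison can be written as a one-line back-of-the-envelope inequality. The $5 and $10 figures come from the quoted passage; the symbol c (the monthly per-beneficiary cost of administering conditions) is a placeholder I am introducing here, not a number from either paper:

```latex
% Back-of-the-envelope sketch of the quoted cost-effectiveness comparison.
% The $5 and $10 figures are from the Baird et al. quote above; c is a
% hypothetical placeholder for the per-beneficiary, per-month cost of
% monitoring compliance with conditions.
\[
  \underbrace{\$5 + c}_{\text{CCT cost per beneficiary-month}}
  \;<\;
  \underbrace{\$10}_{\text{UCT cost for the same enrollment gain}}
  \quad\Longleftrightarrow\quad
  c < \$5 .
\]
% Conditions remain the more cost-effective instrument as long as monitoring
% them costs less than $5 per beneficiary per month; a gap of "an order of
% magnitude," as the quote suggests, would put c at roughly $0.50.
```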
It would have been quite easy for the author to cite the entire extant literature on this question and state that these studies all, more or less, point in the same direction: attaching conditions for children to attend school improves schooling outcomes over and above the effect of unconditional cash transfers. (If he were interested in reporting effects on other outcomes, he could also have mentioned that UCTs may outperform CCTs on other important outcomes of interest – which could support an earlier point about CCTs potentially penalizing households least able to comply with conditions.) Other concerns about CCTs, such as conditions potentially undermining the social protection aim of cash transfer programs or being necessary for political reasons, would have remained equally valid. But there are not a lot of people who would summarize the existing evidence in this way.
So, why did the author do it this way? I don’t know, but reading the section gives the impression that the author is dubious about the value of conditions in cash transfer programs, lists a series of problems with them, and is not particularly interested in giving much weight to their potential benefits – benefits that are supported by theory and evidence. What gives me pause is that this appears in a background paper prepared for the high-level panel on the post-2015 development agenda.
Perhaps we need some reporting guidelines for the economics papers we write. Chris Blattman linked to a paper on the adequacy of reporting in economics RCTs, which I’ll discuss in two weeks…

Authors

Berk Özler

Lead Economist, Development Research Group, World Bank
