
Is the “conditional” in CCTs just a monitoring technology? Evidence from Brazil

By David McKenzie

The typical arguments made for the conditionality in CCTs are based on paternalism (people might have incorrect beliefs about the value of education, or parents may have incomplete altruism toward their kids), externalities (the social returns to education exceed the private returns, so individuals underinvest), or political economy (it is easier to sell transfers to voters if you make them conditional). A paper by Leonardo Bursztyn and Lucas Coffman, forthcoming in the Journal of Political Economy, offers a new rationale: parents prefer CCTs to unconditional transfers because the conditionality enables them to monitor school attendance.

The authors conducted an experiment with 210 families who were already enrolled in and benefiting from a CCT program in Brazil (Bolsa Escola Vida Melhor), and who had a child aged 13 to 15 in the household. At the time of the study, the families were receiving R$120 per month, conditional on the child attending at least 85% of classes each month.

The parents were surveyed, and the surveyor offered each parent the opportunity to change to a new cash transfer program. There were 25 questions, each a choice between a cash transfer conditional on attendance (the current program) and an unconditional transfer, which would also be paid monthly in the same manner to the same parent. For example: would you prefer R$120 conditional on attendance, or R$125 unconditionally?

Parents were informed that 5% of participants would have one of their decisions implemented, with the implemented decision chosen at random from the 25 questions (yes, the local government was apparently willing to change 10 families’ programs for 4 months). They were also informed that at the end of the experiment their child would be made aware of the choices the parent had made.

There were then two treatments (along with the control group of 61 families which got the above):

· Text message treatment: a treatment group of 51 families was asked, prior to the CCT vs. unconditional questions, whether they would like to receive a free text message sent to their cellphone each day their child missed school (49 of the 51 took up this offer; the other 2 didn’t have cellphones).

· Don’t tell treatment: a treatment group of 47 families was instead told that their child would not be informed if the transfer program was changed – so parents could still allow the child to believe the transfers were conditional on attendance.

Results

Parents are willing to pay to keep the conditionality: 82% of parents turned down a larger unconditional transfer amount in order to keep the conditionality; the average (censored) willingness to pay was R$37, which was 6% of pre-CCT monthly household income and almost one-third of the CCT amount.
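(As an aside on how a censored figure like that is computed from 25 binary choices, here is a minimal sketch. The post does not list the actual grid of unconditional amounts, so the offers below are hypothetical, and the design is simplified so the conditional side of every question is the current R$120 CCT.)

```python
# Minimal sketch of the price-list logic behind a censored WTP figure.
# The grid of unconditional offers below is hypothetical.
CCT_AMOUNT = 120  # R$ per month, conditional on 85% attendance

# Hypothetical grid of 25 unconditional offers, R$ per month.
unconditional_offers = list(range(90, 215, 5))  # R$90, R$95, ..., R$210

def censored_wtp(chose_conditional):
    """Infer willingness to pay to keep the conditionality from the largest
    unconditional offer the parent turned down; censored at the top of the
    grid if the parent never switches to the unconditional transfer."""
    rejected = [offer for offer, kept_cct
                in zip(unconditional_offers, chose_conditional) if kept_cct]
    if not rejected:
        return 0  # took the unconditional transfer even at the lowest offer
    return max(rejected) - CCT_AMOUNT

# Example: a parent who keeps the CCT for every unconditional offer up to R$155.
choices = [offer <= 155 for offer in unconditional_offers]
print(censored_wtp(choices))  # 35 -> forgoes up to R$35/month to keep the condition
```

Averaging this across parents, with never-switchers capped at the top of the grid, is what produces a censored mean like the R$37 reported above.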

Once parents were offered costless monitoring, demand for the conditionality fell dramatically: only 34% of the text message group was still willing to pay for it.

Willingness to pay for the conditionality also fell under the “don’t tell” treatment, suggesting that the willingness to pay is primarily about monitoring the child’s behavior. The magnitude of the effect is similar to that of the text message treatment.

Discussion

This is a clever little experiment, which suggests that in cases where parents find it hard to monitor attendance (only 7% of kids report travelling to school with their parents), CCTs might operate by providing parents with valuable information on their child’s attendance behavior. This channel is presumably more important for older kids, where preferences for schooling may diverge more between parent and child. The sample is pretty small in each treatment, so it would be interesting to see this replicated in other countries and programs.

One issue I have with this (and many other studies which do something similar) is the description of “real stakes” for a situation in which only 5% of households actually get awarded something, and even then only one of their 25 questions is real. So the chance of any given question actually being played for real is only 1/20 x 1/25 = 1/500. Given that the gains from switching were approximately R$30 per month for 4 months (R$120 in total), the expected value is something like 0.05 x 120 x 0.5 = R$3 (the 0.5 arises because half the questions asked whether you would prefer a larger CCT to a smaller unconditional transfer). So the expected value is only 2.5% of one month’s transfer, which may not qualify as “real money”. I see a similar issue at times in measuring risk aversion, where people get asked 20 choices between lotteries, and then 1 in 10 people have one of their lotteries played for real. It seems likely that some chance of it being for real is better than zero chance, but I am not sure we would get the same results from such cases as when every choice is for real money. Anyone know of a paper which tests this?
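To make the back-of-the-envelope arithmetic explicit, here is a quick check, with every number taken from the paragraph above:

```python
# Stakes arithmetic from the paragraph above (all numbers as stated there).
p_selected = 0.05    # 5% of households have one decision implemented
p_question = 1 / 25  # the implemented decision is one of 25 questions

print(p_selected * p_question)  # 0.002 = 1/500 chance a given question is real

gain_per_month = 30   # approximate R$ gain from switching
months = 4            # duration of the program change
relevant_share = 0.5  # half the questions offered the larger unconditional amount

expected_value = p_selected * gain_per_month * months * relevant_share
print(expected_value)        # 3.0 -> R$3
print(expected_value / 120)  # 0.025 -> 2.5% of one month's R$120 transfer
```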

 

Comments

Submitted by Sheheryar Banuri on
Hi David,

The closest paper that comes to mind is the following piece by Brandts and Charness on the strategy method (i.e., individuals making contingent choices) vs. the direct response method (i.e., individuals making choices after learning the earlier outcome). While it doesn't directly respond to your query on the stakes issue, it does provide some evidence on the value of the strategy method, and outlines the factors contributing to differences. Link: http://digital.csic.es/bitstream/10261/45296/1/10.1007-s10683-011-9272-x.pdf

I'm all for a direct test of this, but I would caution against eliciting multiple choices and paying for each (I've refereed a few papers that have done this). This leads to what behavioralists term the "portfolio effect", where each decision is no longer independent of earlier decisions. This means that the 25 questions would have to be asked separately, leading to an expensive data collection effort. I don't particularly care for their design choice of 5% of households, and (like you) am skeptical of any elicitation procedure with such low probabilities, but given budget constraints, I sympathize.

Sheheryar

P.S. The authors in the linked paper find more differences in behavior when there are fewer decisions, indicating that the right choice is not to constrain the decision space.

Submitted by Anonymous on
The critique (that the stakes aren't large enough, or that this is perhaps confounded with risk aversion) should be discarded in light of the fact that the authors only report across-treatment differences. These concerns should be differenced out unless you think something interacts with a treatment. Can you see why one should?