
Learning from cross-disciplinary impact evaluation: the Family Rewards CCT program in New York City

By Jed Friedman

I am intrigued by the promise of mixed methods in impact evaluation, although, to be honest, not many examples of a successfully integrated mixed methods evaluation – one where the quantitative and qualitative components enhance each other and we learn equally from both – come to mind. So I was quite happy to find the evaluation of the CCT piloted in New York City, conducted by a team of developmental psychologists. A cross-disciplinary framework for impact evaluation promises an enhanced perspective beyond what economists or psychologists alone can produce, and this work is a good example of such collaboration.

The Family Rewards program is a CCT in New York City whose pilot phase ran from 2007 to 2010. The program incentivized 22 conditions relating to health (such as health insurance coverage or check-ups), education (school attendance and test scores), and work (parents' full-time work and/or job training). A participating family could earn up to $6,000 per year in incentives, so the benefit level was reasonably substantial for low-income households. As with many other CCT programs around the world, researchers found increased compliance and participation with many of the incentivized conditions but little impact on longer-term outcomes such as academic achievement (as assessed through standardized test scores).

The mixed methods study extends our understanding of the causal mechanisms behind the Family Rewards results through interviews with 563 parent-teenager dyads (parents and children interviewed separately) from both beneficiary and control households. The topics included time use, spending patterns, and the quality of family interactions, all discussed through a structured but open-ended interview. Some of the main findings:

- Teenagers in beneficiary families devoted more time to academic activities and less time to social activities. Although the aggregate changes in time use weren't particularly dramatic, they were clearly identifiable – the percentage of teenagers who spent most of their after-school time on academic activities was 24 percent in the treated group vs. 16 percent in the control group, while the equivalent measure for time devoted to social activities fell from 21 to 15 percent.

- Beneficiary households spent more money to support academic activities (including supplies), as well as on leisure and entertainment. The rate of parents saving for their children's future education increased by 13 percentage points.

- There was little change in how parents and their teenagers interacted. Parents did not increase their monitoring, nor did the program increase parent-child conflict.

- Beneficiary students did not show any greater engagement in school or any heightened sense of academic self-confidence (efficacy). At the same time, intrinsic motivation for academic success was unaffected by the financial incentives.

- Life stressors for beneficiary teens appear to have been reduced – beneficiaries reported lower levels of aggression (a 10 percentage point reduction) and less alcohol use (14 percentage points less) and marijuana use (7 percentage points less). However, there appeared to be no effect on the teens' reported depression or anxiety.

How can these findings enhance our understanding of program performance? Well, several of the authors of this study, notably Sharon Wolf, J. Lawrence Aber, and Pamela Morris, also authored a framework paper discussing the benefits of evaluating social programs through a psychological lens. The goal of this framework is to leverage psychological theory to address select knowledge gaps in CCT performance. One such gap is why CCTs have generally had success with incentivized indicators, such as school enrollment, but have not moved longer-term outcomes, such as academic achievement.

The authors apply their framework to help identify program mediators – the mechanisms through which a program works to affect priority outcomes – as well as moderators – the factors that determine why the same program performs differently in different settings.

One example of a mediator is the alleviation of parental stress due to reduced financial strain. As a result, parents may have more time to interact with their children as well as the resources to materially invest in their children. The evaluation did find increased spending on children as a result of Family Rewards, but little change in parent-child interaction.

An example of a moderator can be found in the heterogeneity of Family Rewards impacts. While the program led to little gain in overall academic achievement for high school students, it turns out that children who were better academically prepared at baseline (as assessed through test scores) did in fact register significant increases in test scores after two years in the program. It’s quite possible that these children had the resources (in terms of ability or past parental support) to enable them to take full advantage of the incentives.
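For readers who think in regressions, the moderation pattern described above can be sketched as a treatment × baseline-preparation interaction term. The simulation below is purely illustrative – the variable names, effect sizes, and data are invented for the sketch, not taken from the Family Rewards study:

```python
import numpy as np

# Illustrative simulation: moderation appears as a treatment x baseline
# interaction in an OLS regression. All numbers here are hypothetical.
rng = np.random.default_rng(0)
n = 2000
treat = rng.integers(0, 2, size=n)     # 1 = offered the program (simulated)
prepared = rng.integers(0, 2, size=n)  # 1 = academically prepared at baseline

# Suppose the program raises test scores only for prepared students:
score = 50 + 2 * prepared + 5 * treat * prepared + rng.normal(0, 5, size=n)

# Fit score on treatment, baseline preparation, and their interaction;
# the interaction coefficient captures the moderation effect.
X = np.column_stack([np.ones(n), treat, prepared, treat * prepared])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"treatment effect for unprepared students: {beta[1]:.2f}")
print(f"extra effect for prepared students (interaction): {beta[3]:.2f}")
```

In this toy setup the estimated interaction term is large while the treatment main effect is near zero – the same qualitative pattern as finding test-score gains only among students who were better prepared at baseline.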

One of the more prominent psychological theories in this framework is self-efficacy theory. Self-efficacy refers to an individual's subjectively assessed ability to accomplish certain outcomes. In the program context, self-efficacy beliefs are strongly related to academic motivation and subsequent achievement, and it was hoped that Family Rewards would strengthen the self-efficacy of beneficiary students. However, the incentivized conditions adopted by the program may have been too "distant" to effectively improve self-efficacy. The authors write,

Experimental evidence finds that positive incentives can promote interest and self-motivation when they enhance or validate self-efficacy beliefs… when people master a desired action they feel satisfaction and increased self-efficacy…satisfaction gained from mastering smaller and proximal sub-goals can boost self-efficacy beliefs and self-motivation for achieving larger goals. Large distal goals alone, on the other hand, can be overwhelming and are likely to fail to change immediate behavior. As a result, distal goals are less likely to engender self-efficacy beliefs and personal interest in achieving the goal. Therefore incentivizing goals related to mastering a task (i.e. a proximal goal), as opposed to performance (i.e. a distal goal) can increase self-efficacy beliefs and self-motivation in accomplishing other similar goals.

With regard to incentive programs in general, children are typically rewarded for academic achievements such as passing a test at the end of the school year. But this distal goal may be too remote for students who need to develop layers of various skills before they can ultimately do well on the final exam. Might incentive-based programs find better success if they targeted more proximal goals such as grades (which are reported more frequently and are perhaps more directly related to effort than standardized tests)? Other good choices for proximal goals may include homework assignments or participation in after-school tutoring.

Some lessons and directions for future work from a cross-disciplinary impact evaluation…
