This blog has now featured a healthy debate between researchers advocating randomized evaluations and those cautioning against the overuse of such methods. One point I believe both sides would agree on is that irrespective of which empirical methods we use, it is important to understand and analyze the causal chain of impact. Such analysis can greatly enhance the external validity of any evaluation.
In this blog post, I want to talk about an evaluation of financial literacy where we are trying to do just that. But first, some background on financial literacy and its fast-growing popularity.
Financial literacy has come to play an increasingly important role in financial reform across the world. Survey after survey has shown that financial literacy levels are low in both developed and developing countries, and that they strongly predict financial behavior and formal sector participation (see the International Gateway for Financial Education website for a comprehensive collection of research in this area).
Motivated by this evidence, governments, firms, and non-profit organizations are responding to the perceived problem of limited financial literacy with a range of public and privately-provided financial education programs, targeted to reach tens or even hundreds of millions of individuals in the coming years. Yet, to date, there is very little rigorous evidence on which policies are effective and for whom (see my earlier blog post on “The Fad of Financial Literacy?”).
While some rigorous evaluations of financial literacy programs are now underway, the focus seems to be mostly on measuring end outcomes such as behavior change or financial product take-up, and not much on the mechanism of impact: why and how do financial literacy programs affect financial behavior?
In new work from India, in collaboration with Shawn Cole, Jeremy Shapiro, and Fenella Carpena, we are evaluating a comprehensive video-based financial literacy program for households in Ahmedabad. This is a fairly large randomized evaluation (1,200 individuals, randomized at the individual level) with several orthogonal treatments, and it follows up on Shawn's and my previous work in Indonesia (you can see that paper here). What I want to focus on in this blog is the first component of our treatment analysis: the impact of financial literacy training on financial knowledge, in other words, the first link in the causal chain.
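To make the design concrete, here is a minimal sketch of what individual-level assignment to orthogonal (cross-randomized) treatments could look like. The specific arms, assignment probabilities, and cell sizes below are illustrative assumptions for exposition, not the study's actual protocol.

```python
import random
from collections import Counter

random.seed(42)  # reproducible illustration

N = 1200  # sample size, randomized at the individual level

# Two illustrative treatment arms, drawn independently so they are
# orthogonal: one person's financial-literacy status carries no information
# about whether they also face a pay-for-performance incentive.
assignments = [
    {
        "id": i,
        "financial_literacy_video": random.random() < 0.5,
        "pay_for_performance": random.random() < 0.5,
    }
    for i in range(N)
]

# Cross-tabulate the 2x2 design: each cell should hold roughly N/4 people.
cells = Counter(
    (a["financial_literacy_video"], a["pay_for_performance"]) for a in assignments
)
print(cells)
```

Because the arms are assigned independently, the effect of each treatment can be estimated without contamination from the others, which is what lets us separate, say, the literacy training itself from the incentive to perform well on the knowledge test.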
Financial knowledge has typically been measured using standard questions that rely primarily on respondents' numeric and computational ability, for example asking them to compare two loan repayment plans, one stated as a percentage APR and the other as a fixed cash payment. It is no surprise that nearly all surveys show a strong correlation between financial literacy scores and mathematical ability.
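As a rough, hypothetical illustration of the arithmetic such a question demands (the loan figures below are invented for the example and do not come from the study's questionnaire), the respondent has to convert the APR into a cash cost before the two plans become comparable:

```python
# Hypothetical loan-comparison question: which repayment plan costs less?
# All numbers are made up for illustration only.

principal = 10_000   # loan amount (e.g., rupees)
months = 12          # repayment period

# Plan A: interest quoted as a percentage APR (simple interest for clarity)
apr = 0.24           # 24% per year
interest_a = principal * apr * (months / 12)

# Plan B: interest quoted as a fixed cash payment on top of the principal
interest_b = 2_000

print(f"Plan A interest: {interest_a:.0f}")
print(f"Plan B interest: {interest_b:.0f}")
print("Cheaper plan:", "A" if interest_a < interest_b else "B")
```

Getting this right requires multiplication and percentage reasoning rather than any knowledge of financial products per se, which is exactly why such questions track mathematical ability so closely.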
But is this really a measure of financial knowledge? On the one hand, many financial choices we face in the real world require calculating interest rates and estimating returns, so perhaps financial knowledge and mathematical ability go hand in hand. However, financial literacy programs may also affect financial decision-making through other channels, for instance by making individuals and households more aware of the product choices available to them, equipping them to ask the right questions of financial providers, encouraging them to seek professional and personalized financial advice, and changing their attitudes towards purchasing and recommending formal financial products and services. These alternate channels may be as important as, if not more important than, enhancing numeracy skills.
In our work in India, we use survey questions designed to test the impact of our financial literacy program on three distinct dimensions of financial knowledge: (i) numeracy skills, (ii) basic financial awareness, and (iii) attitudes towards financial decisions. These questions were asked as part of the first wave of follow-up surveys, administered to both treatment and control groups of our experimental sample soon after the treatments were delivered.
Our results are quite telling. We find that financial education has a fairly limited role in equipping individuals to evaluate complex financial trade-offs that require high numeracy skills. We do not find that financial education enables individuals to choose the loan option that minimizes expense, to select the most appropriate savings or insurance product, or to create a budget effectively. What is striking is that even individuals who were offered monetary incentives for their performance on the knowledge test (we had an orthogonal pay-for-performance treatment) were unable to answer such questions correctly. These results suggest that mathematical skills are critical for evaluating financial trade-offs, and that a financial education program that does not specifically address numeracy has little impact on an individual's ability to make financial calculations. Indeed, combining financial education with mathematics training may be necessary to improve financial numeracy.
In contrast, we find that the financial literacy program we study leads to large and statistically significant improvements in individuals' awareness of the financial products and services available to them, as well as their familiarity with the details of such products and services. Specifically, individuals who received financial literacy training were 15% more likely to know the minimum requirements for opening a bank account, and 20% more likely to understand unproductive loans.
We likewise find that financial education changes respondents' attitudes towards purchasing and recommending formal financial services and financial planning tools. In particular, when asked to give financial advice in hypothetical scenarios, individuals who received the financial literacy treatment were 9% more likely to suggest buying health, life, or accident insurance to a friend whose job as a construction worker was described as highly risky. They were also 19% more likely to suggest budgeting to a friend who was reportedly clueless about her household's income and expenditure.
Our results have strong implications for how financial literacy is measured and evaluated. The finding that financial literacy programs do not immediately make recipients better evaluators of financial products, even when they are offered a financial incentive to be attentive, suggests that financial literacy measurement should not focus exclusively on questions that require high numeracy skills, but should rather include a healthy mix of basic financial awareness and attitudinal questions. Our paper provides a starting list of such questions.
Apart from measurement, our results suggest a sequential channel through which financial literacy may be effective: financial skills may evolve by first increasing awareness and changing attitudes, then improving the ability to compare financial products, and subsequently improving financial behavior. The results of this paper provide strong evidence for the first step in this sequence. Our follow-up surveys, currently underway, will allow us to test whether the improvements in awareness and attitudes enable individuals who received the training to subsequently access appropriate financial products and make better financial decisions.
Note: The overall project is entitled “The ABCs of Financial Literacy: Experimental Evidence on Attitudes, Behavior, and Cognitive Biases.” For those interested in this topic, watch this space and the authors’ web pages for more results in the coming months as more surveys trickle in.
Bilal Zia is an Economist in the World Bank's Finance and Private Sector Development Unit of the Development Research Group. His research can be found here.