Published on Development Impact

Some things to think about when conducting information experiments

Information experiments are very popular in some branches of economics for several reasons. First, they are cheap to implement, making them feasible for many researchers and less reliant on raising lots of funding. Second, they can be used to generate random variation in decisions related to things that are otherwise not feasible to randomize, such as behaviors related to macroeconomic events like inflation, beliefs about the risks of dying when migrating, or attitudes towards redistribution shaped by beliefs about income inequality. Third, both introspection and extrospection suggest that ignorance is pervasive in the world – there are many policies or economic decisions where many people seem to be deciding with far from perfect information – making it natural to think we can jump in and change this.

One of my early posts on this blog (in 2013) was titled The Illusion of Information Campaigns: Just because people don’t know about your policy, it doesn’t mean that an information campaign is needed. The basic point still stands – there are many cases where information is not the binding constraint, and we need to think about how easy it is for people to find the correct information if they really want it. That said, information experiments are increasingly used, and so it was nice to come across a new working paper by Ingar Haaland, Christopher Roth, and Johannes Wohlfart that reviews methodological issues in designing information experiments. I thought I’d highlight some of the lessons I see as most relevant for information experiments in development.

What are some recent examples of information experiments in development?

Before discussing the details of the paper, here are a few examples to fix in mind when thinking about what an information experiment could entail.

·       Migration: In February’s WBER, Maheshwor Shrestha reports on an experiment in which he gave potential Nepalese migrants information about the risks of dying and the wages from working abroad, finding that this leads them to change their expectations of these risks and, in turn, their actual migration decisions.

·       School quality: In the 2017 AER, Andrabi, Das and Khwaja report on an experiment in Pakistan in which they provided randomly selected households with child and school test score information, and show how this leads to changes in learning, enrolment, and school competition.

·       Policy decision-making: In a recent working paper, Hjort et al. conduct an experiment in Brazil in which they provide research findings to Brazilian mayors, and find that this information increases the likelihood the municipality adopts the policy.

·       Changing social norms: In a paper forthcoming in the AER, Bursztyn, González and Yanagizawa-Drott provide information about what others think about women working outside the home in Saudi Arabia, and find this leads to changes in husbands helping their wives search for jobs.

How should we think about modelling what an information experiment does?

The first lines of the introduction provide a really clear way of thinking about how to model information experiments: “Standard economic theories usually understand choices as a combination of four factors: preferences, constraints, information, and beliefs. The goal of economic experiments is typically to change some features of the choice environment to study how choices are made. Information experiments achieve this by varying the information set available to economic agents.” 

In particular, the channel they have in mind here is that a change in information leads to a change in beliefs, which in turn affects the decisions made by agents, and thus outcomes. This has implications for the causal chain you want to measure. Note here the standard economic assumption is that preferences are fixed and won’t change with information. For example, in the migration case above, a potential migrant has fixed risk preferences, along with beliefs about the wages they can earn abroad and the risk of dying. The information campaign changes the information used to form these beliefs, which in turn affects the decision to migrate.
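To make this causal chain concrete, here is a minimal sketch (in Python, purely for illustration) of the migration example: preferences and the decision rule are held fixed, and only the belief about the death risk changes with the information. The updating weight, wages, and risk numbers are all made-up assumptions, not parameters from Shrestha’s study or the Haaland et al. paper.

```python
# Illustrative sketch: information -> beliefs -> decision (migration example).
# All numbers and the linear updating rule are assumptions for exposition only.

def updated_belief(prior, signal, weight=0.6):
    """Belief after an information treatment: a weighted average of the prior
    and the signal, where `weight` captures how much the respondent absorbs
    the new information (1 = full updating, 0 = none)."""
    return (1 - weight) * prior + weight * signal

def migrates(belief_death_risk, wage_abroad, wage_home, death_cost=50_000):
    """Migrate if the expected value abroad (wage minus the expected cost
    attached to dying) exceeds the wage at home. Preferences (here, risk
    neutrality and the cost attached to death) stay fixed; only beliefs move."""
    return wage_abroad - belief_death_risk * death_cost > wage_home

prior = 0.02      # respondent overestimates the death risk (2%)
signal = 0.005    # information treatment: measured risk is 0.5%
posterior = updated_belief(prior, signal)

print(f"posterior belief: {posterior:.3%}")
print("migrates before info:", migrates(prior, wage_abroad=1_500, wage_home=800))
print("migrates after info: ", migrates(posterior, wage_abroad=1_500, wage_home=800))
```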

However, it is worth considering that in some circumstances information may also change the constraints people face. For example, consider a firm owner deciding how much to invest, subject to their risk preferences, beliefs about the returns to investment, and borrowing constraints. Providing them with information about a new type of loan could affect the borrowing constraint, but not beliefs or preferences. So this leads to a first point: is your information intended to change a belief or a constraint? A related question is whether the problem is incorrect beliefs, or simply a lack of knowledge – e.g. in the examples above, changing social norms is about changing beliefs about what others think about women working, whereas I would consider the Brazil study more a case of passing on new knowledge about something people may not have thought much about before.

Measuring beliefs before and after the information intervention

The paper then discusses the importance and usefulness of measuring beliefs, both before and after the information intervention. They note several reasons for doing this:

1.       The expected directional response to an information treatment depends on what people believe in the first place, so you should expect heterogeneity in the response to information treatments. For example, people who already have correct beliefs about social norms around women working outside the home, or about the risk of dying when migrating, are less likely to respond to being told the correct information than people with incorrect beliefs. And someone who underestimates the odds of dying when migrating should behave differently when given the correct information than someone who overestimates them.

2.       To interpret effect sizes and examine mechanisms, you want to be able to show whether people update from the information. Now in a randomized experiment with a pure control group, you could just measure this by comparing treatment and control posterior beliefs, but having the prior beliefs is useful for understanding which individuals update beliefs, for improved statistical power, and for the first reason above.
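On the statistical power point, a quick simulation sketch shows the basic idea: when posterior beliefs are strongly correlated with prior beliefs, controlling for the measured prior soaks up residual variance and shrinks the standard error on the treatment indicator. The sample size, effect, and correlation below are arbitrary illustrative choices, not numbers from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000                                   # assumed sample size
prior = rng.normal(0, 1, n)                # baseline (prior) belief, standardized
treat = rng.integers(0, 2, n)              # random assignment to the information treatment
# Posterior belief = persistent component of the prior + treatment effect + noise.
# The 0.8 persistence and 0.3 effect are purely illustrative.
posterior = 0.8 * prior + 0.3 * treat + rng.normal(0, 0.6, n)

# (a) simple treatment-control comparison of posterior beliefs
se_simple = sm.OLS(posterior, sm.add_constant(treat)).fit().bse[1]

# (b) same comparison, controlling for the measured prior (ANCOVA-style)
X_ancova = sm.add_constant(np.column_stack([treat, prior]))
se_ancova = sm.OLS(posterior, X_ancova).fit().bse[1]

print(f"SE on treatment without prior: {se_simple:.3f}; with prior: {se_ancova:.3f}")
```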

The tricky issue is then how to measure beliefs. The authors discuss qualitative and quantitative approaches, and whether to just get point estimates or also try to elicit distributions which capture the uncertainty in beliefs. This review paper I have with Adeline Delavande and Xavier Gine provides guidance on how to elicit subjective probabilities in developing countries. One useful methodological point is that they discuss whether you should try to incentivize responses, and they conclude, based on existing evidence, that you do not need to (and that it might even backfire).

They also note that in measuring beliefs after the intervention, you need to worry about the possibility of experimenter demand effects. Several approaches are possible: (i) separating the follow-up survey in time from when the information was delivered, to make the intervention less salient; (ii) conducting an “obfuscated” follow-up survey, in which a different organization and survey team does the follow-up for an ostensibly different reason, so that it is less clearly linked to the intervention; (iii) using list randomization or other methods to make responses anonymous; and (iv) using some of the methods to bound demand effects discussed in this previous post. They also note that demand effects are less likely if participants cannot identify the intent of the study, so this is something you can directly ask about.
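For readers less familiar with list randomization, here is a stylized sketch of the basic item-count estimator; the items, counts, and group sizes are invented for illustration and are not from any of the studies discussed.

```python
import numpy as np

# Minimal sketch of the list-randomization (item count) estimator.
# One random group sees J innocuous items; the other sees the same items plus
# the sensitive item. Respondents only report HOW MANY items apply to them,
# so no individual answer to the sensitive item is revealed.
counts_without_item = np.array([2, 1, 3, 2, 2, 1, 3, 2])   # J = 3 innocuous items
counts_with_item = np.array([3, 2, 3, 2, 3, 2, 4, 3])      # J + 1 items

# The difference in mean counts between the two groups estimates the share
# of respondents for whom the sensitive item holds.
share_sensitive = counts_with_item.mean() - counts_without_item.mean()
print(f"Estimated prevalence of the sensitive response: {share_sensitive:.2f}")
```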

Designing the information intervention

This is the key challenge for implementation of any information intervention. The paper discusses some of the key design choices to make, but this is an area where a lot more best practices could be pulled together, especially for implementing information experiments in developing countries. A few of the things the authors note to consider are:

·       Whether to provide quantitative or qualitative information (or both) – for example, in an ongoing experiment I am doing that provides information about the risks of irregular migration from the Gambia to Europe, we provide both quantitative information about the risks of dying at different points in the journey and qualitative information in the form of the stories and experiences of returnee migrants.

·       How to present the information – they emphasize the importance of making this clear (e.g. through graphs or visuals) – we made videos that illustrate visually, starting with 100 migrants, how many would experience difficulties at each stage of the journey.

·       Consider how credible respondents find the source of the information – if people think the source is biased or untrustworthy, they may react less, or in a different way. The authors suggest including direct questions at the end of the survey on how credible and accurate people found the information provided. Of course, you worry about experimenter demand effects here, and so the authors note one possibility is to elicit incentivized measures of willingness to pay for the information of interest. The Hjort et al. study in Brazil provides an example of doing this – at the start of the experiment they give mayors 100 lottery tickets with a prize of a trip to the USA; the mayors could then save their tickets for the draw, or use some of them to learn the estimated effect size from a research study (the information) – with a BDM procedure used to measure maximum willingness to pay (see the sketch after this list).

·       Whether to include an active control group – the question is whether you should give the control group nothing, or give them different information. A key issue here is how to give the control group different information from the treatment group without deceiving them. One approach has been to give information about something orthogonal to the main treatment (e.g. give health information to the control group when the treatment group’s information is about something else). An alternative is to have different sources of information that you can reveal – Shrestha randomly chooses which district in Nepal to give potential migrants information about, so that some are told death rates from a district with a relatively high number of deaths, and others from a district with a relatively low number.

·       Consider obfuscated information treatments to blunt experimenter demand effects. This is an interesting suggestion that I hadn’t thought as much about before. They note that one possibility is to give people an unrelated reason for why they receive the information of interest. For instance, researchers could tell respondents that they need to proofread or summarize pieces of information.
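To see why the BDM procedure mentioned above is incentive compatible, here is a stylized sketch of the mechanism. The 100-ticket endowment follows the description of the Hjort et al. design, but the uniform price draw and other details are generic illustrative assumptions rather than the exact protocol in that paper.

```python
import random

def bdm_purchase(stated_wtp, endowment=100, rng=None):
    """Stylized Becker-DeGroot-Marschak (BDM) elicitation.

    The respondent states the maximum number of lottery tickets they would
    give up to receive the information. A price is then drawn at random; if
    the drawn price is at or below the stated maximum, they pay the drawn
    price (not their stated maximum) and receive the information; otherwise
    they keep all their tickets. Because the price paid never depends on the
    statement itself, reporting the true maximum willingness to pay is the
    respondent's best strategy.
    """
    rng = rng or random.Random(0)
    price = rng.randint(0, endowment)      # illustrative: uniform price draw
    buys = price <= stated_wtp
    tickets_left = endowment - price if buys else endowment
    return buys, price, tickets_left

buys, price, tickets_left = bdm_purchase(stated_wtp=30)
print(f"drawn price: {price}, buys info: {buys}, tickets remaining: {tickets_left}")
```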

What to expect in terms of effect sizes?

The authors note that effect sizes on self-reported attitudes and behavioral measures are typically much smaller in magnitude than effect sizes on belief updating. They “recommend employing a sample size of at least 700 respondents per treatment arm of interest. Furthermore, since many information experiments yield small or modest effects, it is important to have relatively large samples in order to identify a precise null finding.”
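To get a rough sense of what a sample of this size buys you, here is a minimal power calculation, using a standardized effect size and the conventional 5% significance level and 80% power – assumptions of mine for illustration, not values taken from the paper.

```python
from statsmodels.stats.power import TTestIndPower

# Minimum detectable effect (in standard deviation units) for a two-arm
# comparison with 700 respondents per arm, 5% significance, 80% power.
mde = TTestIndPower().solve_power(effect_size=None, nobs1=700,
                                  alpha=0.05, power=0.8, ratio=1.0)
print(f"Minimum detectable effect with 700 per arm: {mde:.2f} SD")  # roughly 0.15 SD
```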

As an example, suppose you start with 1000 people. Perhaps 40% already have the correct information, and so your treatment will likely not have any effect on those people. Then, among the remaining 600, maybe only one-third have information/beliefs as the main factor determining their choice, whereas the other two-thirds face other constraints (e.g. lack of money, lack of time, lack of interest) that your information treatment does not alleviate. So if the treatment is not going to affect the majority of people receiving it, you need massive effect sizes or very large samples to detect an effect on those who are affected. An alternative is to figure out ways to better target the treatment at those who lack the correct information and for whom information may be the binding constraint.
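Working through this arithmetic (with a purely illustrative effect size for those who can respond) shows just how much dilution matters:

```python
n_sample = 1000
share_already_informed = 0.40                            # treatment cannot change their (already correct) beliefs
share_info_binding = (1 - share_already_informed) / 3    # of the rest, information is the binding constraint for a third
effect_on_responsive = 0.25                              # assumed effect (in SD units) on those who can respond

share_responsive = share_info_binding                    # 0.20 of the full sample
n_responsive = int(n_sample * share_responsive)

# The intention-to-treat (ITT) effect averages over everyone assigned to
# treatment, so it is diluted by the share who can actually respond.
itt_effect = share_responsive * effect_on_responsive     # 0.05 SD

# Required sample size scales with 1/effect^2: diluting the effect by a factor
# of five inflates the sample needed to detect it by a factor of twenty-five.
inflation = (1 / share_responsive) ** 2
print(f"{n_responsive} of {n_sample} can respond; ITT effect = {itt_effect:.2f} SD; "
      f"sample inflation factor = {inflation:.0f}x")
```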


Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
