Learning to Detect Fake News: A Field Experiment to Inoculate Against Misinformation in India. Guest Post by Naman Garg

This is the 14th in this year’s series of posts by PhD students on the job market.

A well-informed electorate is important for the functioning of democracies. Although the internet has been crucial in expanding access to information, its overall impact is ambiguous because of the countervailing effects of widespread misinformation. Citizens increasingly struggle to discern the veracity of information online: a recent survey across 46 countries found that more than half of respondents “worry about identifying the difference between what is real and fake on the internet when it comes to news.” Such high exposure to online misinformation, combined with people’s inability to discern its veracity, drives misperceptions about a range of policy issues, in turn affecting policy attitudes and behavior. This is seen as an important factor not only in influencing election outcomes but also in driving prejudice against minorities and outgroups, thereby fueling hate crimes and ethnic violence.

In my job market paper (co-authored with Monika Yadav), we conduct a large pre-registered field experiment (N=1,301) testing an intervention aimed at improving people’s ability to discern the veracity of the information they encounter online and at reducing their misperceptions about minorities. The experiment takes place in the state of Uttar Pradesh, India. The region has recently seen heavy engagement with online misinformation, especially on WhatsApp, which is considered to play an important role in elections. A significant share of this misinformation targets Muslims, the largest religious minority in the country, stoking fears of the Muslim population overtaking the Hindu majority by spreading falsehoods about fertility rates and interfaith marriages. The state government has enacted and proposed various laws to address this alleged rapid change in demographic composition. Hence, in our experiment, we focus on the effect of the intervention on factual beliefs, policy attitudes, and behavior related to these issues.

Experiment Details

The intervention provides weekly digests compiling fact-checks of viral misinformation produced by certified fact-checkers. Since most misinformation revolves around a small set of issues and follows predictable patterns and manipulation techniques, familiarity with these fact-checks can help people internalize heuristics for identifying fake news. A machine learning analogy is useful here: the intervention supplies training data from which people learn to predict the veracity of the information they encounter on social media in the future. The intervention lasted ten weeks (from mid-August to October 2021), during which treated individuals received nine digests.

Sample weekly digest

Misinformation has predictable patterns

In two of the digests, we also included narrative explainers on issues that attract a lot of misinformation: relative trends in fertility rates and the conspiracy theory of “love jihad”. These explainers gave more background and context on the issues, including stories of individuals affected by the related laws and summaries of findings from investigations by law enforcement agencies and independent news organizations. Presenting this information in narrative form is crucial, as narratives have been shown to be more effective than simple quantitative information and statistical facts in changing policy attitudes and behavior.

Sample narrative explainer

We used Facebook ads to recruit participants for the experiment. The intervention was delivered through a mobile app that we custom-built for this study. In addition to a short screening survey at recruitment, participants completed four surveys in total (a baseline survey, two follow-up surveys, and an endline survey), spaced roughly three weeks apart.

Outcomes

We focus on two sets of outcomes. First, to measure truth-assessment ability, we ask study participants about their belief in the truthfulness of various statements; some are true headlines picked from mainstream news sources, while others are viral misinformation that was not included in the digests. This gives us two reduced-form measures of truth-assessment ability: the true positive rate (TPR) and the false positive rate (FPR), i.e., the probability that a statement is perceived to be true conditional on it actually being true or false, respectively.
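
To make these measures concrete, here is a minimal sketch in Python of how the two rates can be computed from one respondent’s ratings of labeled headlines (the data structure and variable names are illustrative assumptions, not our actual analysis code):

def truth_assessment_rates(ratings):
    # ratings: list of (is_true, rated_true) pairs, one per headline.
    true_items = [rated for is_true, rated in ratings if is_true]
    false_items = [rated for is_true, rated in ratings if not is_true]
    tpr = sum(true_items) / len(true_items)    # share of true headlines rated true
    fpr = sum(false_items) / len(false_items)  # share of false headlines rated true
    return tpr, fpr

# Example: a respondent rates two true and two false headlines.
print(truth_assessment_rates(
    [(True, True), (True, False), (False, True), (False, False)]))  # (0.5, 0.5)

A perfectly discerning respondent would have TPR = 1 and FPR = 0, while blanket skepticism pushes both rates down together.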

Second, we measure factual beliefs, policy attitudes, and behavior related to fertility rates and allegedly coerced interfaith marriages. For factual beliefs, we measure respondents’ knowledge of the relative decline in fertility rates of Hindus and Muslims, and their beliefs about the veracity of stories alleging “love jihad” in the news and on social media. For policy attitudes, we ask whether respondents see a need for laws addressing alleged changes in demographic composition and preventing “love jihad”. To measure behavior, we offer study participants the option to donate a portion of their endline survey earnings to an NGO that helps interfaith couples falsely accused of “love jihad”.

Results

Truth Assessment Ability. We find that the intervention substantially increases the ability to identify misinformation: it reduces the FPR by eleven percentage points. However, there is also a small reduction in the TPR of four percentage points. The underlying mechanisms are not obvious from these reduced-form results on FPR and TPR. How much of the effect is driven by an increase in truth discernment, i.e., the ability to distinguish between false and true news, and how much by an increase in skepticism that decreases credulity toward both false and true news?

To disentangle these two mechanisms, we estimate a micro-founded structural model that formalizes the process of truth assessment. In the model, people observe a latent signal about the accuracy of a statement, which can be thought of as their subjective assessment of the statement’s veracity. They use this signal to update their prior and assess the statement to be true when the posterior probability is above an optimal threshold. The truth-discernment parameter in the model corresponds to the precision of the latent signal and thus captures the ability to correctly distinguish between true and false statements. The skepticism parameter is proportional to the prior odds of a statement being false, so changes in skepticism are driven by changes in beliefs about the overall prevalence of false news.
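
For intuition, here is one standard way to write down such a model, using a Gaussian signal-detection parameterization (a simplified sketch; the exact functional forms in the paper may differ). The latent signal is distributed as

$s \mid \text{true} \sim \mathcal{N}(\delta/2,\, 1), \qquad s \mid \text{false} \sim \mathcal{N}(-\delta/2,\, 1),$

where $\delta$ is truth discernment: the larger $\delta$, the more informative the signal. Let $\sigma$ denote the prior odds that a statement is false (skepticism). Bayes’ rule gives posterior odds of truth $\exp(\delta s)/\sigma$, so a reader with assessment threshold $\tau$ calls a statement true exactly when $s$ exceeds the cutoff

$c = \dfrac{\ln \sigma + \ln\!\big(\tau/(1-\tau)\big)}{\delta},$

which implies $\mathrm{TPR} = \Phi(\delta/2 - c)$ and $\mathrm{FPR} = \Phi(-\delta/2 - c)$, with $\Phi$ the standard normal CDF. Higher skepticism $\sigma$ raises the cutoff and lowers both rates together, while higher discernment $\delta$ widens the gap between them; this is what allows the estimation to separate the two mechanisms.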

Intervention increases skepticism and truth discernment, but at different paces

The figure above shows trends in the effect of treatment on these two parameters. Treatment results in an immediate increase in skepticism: as people start reading the digests, they update their prior about the prevalence of misinformation and become more skeptical. The intervention also increases truth-discernment ability; however, learning the patterns needed to correctly distinguish between true and false statements takes more time, and this effect only shows up in the endline survey.

Beliefs, Attitudes, and Behavior. The intervention results in more accurate factual beliefs. Treated individuals are thirteen percentage points more likely to answer correctly that the fertility gap between Hindus and Muslims is narrowing, and seven percentage points more likely to say that all or most stories about “love jihad” are false, corresponding to persuasion rates of 17% and 13%, respectively, from being assigned to treatment. The digests also change policy attitudes and behavior: treated individuals are four percentage points less likely to support the laws on controlling fertility rates and preventing “love jihad” (a persuasion rate of 5%), and they donate a larger amount to prevent harassment of interfaith couples.
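
For reference, persuasion rates of this kind are conventionally computed following DellaVigna and Kaplan (2007); assuming that convention here, the formula is

$f = \dfrac{y_T - y_C}{e_T - e_C} \cdot \dfrac{1}{1 - y_0},$

where $y_T$ and $y_C$ are the shares giving the persuaded answer in the treatment and control groups, $e_T$ and $e_C$ are the two groups’ rates of exposure to the message, and $y_0$ is the share who would give that answer absent exposure (often proxied by the control mean). With full compliance ($e_T = 1$, $e_C = 0$) this simplifies to $(y_T - y_C)/(1 - y_C)$; for instance, in a hypothetical case where half the control group already gives the persuaded answer, a ten-percentage-point treatment effect implies a persuasion rate of $0.10/0.50 = 20\%$.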

Intervention persuades some people that fake news is misinformation

Policy Implications

We show that familiarity with fact-checks can help people learn effective heuristics for discerning the veracity of information on social media. Some news outlets have recently started providing similar summaries or newsletters tracking viral misinformation on social media, and our experiment provides a rigorous evaluation of such efforts. As these become more common, and to the extent there is demand for such fact-checking, this is a potentially promising strategy for inoculating the electorate against the effects of misinformation on factual beliefs.

The digests, which contained narrative explainers, also produce changes in policy attitudes and behavior. This adds to the emerging evidence on the effectiveness of narratives in changing people’s attitudes. Future work should focus on understanding the mechanisms behind the impact of narratives, which can provide useful insights into the relative importance of beliefs versus other factors in determining people’s attitudes toward outgroups.

 

Naman Garg is a PhD candidate in the economics department at Columbia University.

 

