You’ve seen the scenario on “Law and Order” many times: the defense lawyer tosses out a wild accusation that the person on the witness stand (or someone else related to the case) is the real killer – with no evidence whatsoever behind it. Jurors have now heard about an alternative suspect for the crime. The judge proclaims that the jurors “must disregard the last statement.” But, can they?
The “continued influence effect” of misinformation is not limited to jurors. Many continue to believe in a link between certain vaccines and autism, or between Iraq and WMDs. These are important matters of public health and policy.
A new paper by Ecker et al. (2011) in Psychonomic Bulletin & Review (gated version here) implies that the judge should perhaps declare an immediate mistrial rather than warn the jurors. The authors try to answer several questions about the continued influence of misinformation: (i) Does the strength of encoding (of misinformation) affect its continued influence? (ii) Can retractions eliminate this effect? (iii) What if the subject is distracted while this information is being transmitted – does this affect how the information is internalized? The answers are interesting and somewhat depressing.
Ecker et al. have a nice paper. They lay out a brief hypothesis contrasting two theories, and then conduct two experiments that test them. The first theory suggests that people form mental models of unfolding events and may be unwilling to update them if no plausible alternative exists to fill the void. So, presenting someone with another suspect might be much more effective than a statement like “X is no longer a suspect, but we don’t have any other suspects.” The second theory is that some information is processed automatically in memory, while other information has to be strategically retrieved. If misinformation is more likely to be supplied automatically while the strategic retrieval process fails, the continued influence will arise. Furthermore, strategic retrieval takes effort, so cognitive load (i.e., distractions or other demands on memory) will weaken it.
In Experiment 1, the authors vary the strength of both the misinformation and the retractions by increasing the number of repetitions of either, using a well-known scenario of a fire in a warehouse. The misinformation is that the fire was caused by negligence (volatile materials stored in a closet); the retraction is that the closet had been empty. Notice that no alternative scenario, such as an arsonist or a spark from an outlet, is being proposed. The misinformation is repeated one or three times, while there are 0, 1, or 3 retractions (these arms are fully cross-randomized in a 2x3 design, with a control group that received no misinformation added to make 7 groups).
The subjects are quizzed 10 minutes later, after a distractor task, and the number of references to the misinformation is the primary outcome of interest (along with an explicit question about whether they recall the retraction). The results are interesting. First, and unsurprisingly, three repetitions of misinformation (3-MI) produce more references to the misinformation than just one repetition. Second, retractions (one or three) do not eliminate the references to misinformation despite the fact that subjects remember the retraction: the number of references to misinformation is substantially higher in the groups that received the retraction(s) than in the control group. Third, one retraction (1-R) is as effective as three (3-R) when the encoding was weak (1-MI). However, when the encoding was strong (3-MI), 3-R was substantially more effective than 1-R (but still could not come close to eliminating the misinformation).
In Experiment 2, the control group was dispensed with (it had not produced a significant number of references to misinformation) and some cognitive load is overlaid on the MI and R: people read aloud a 7-digit number, and some then have to memorize it (and reproduce it later) while others only have to read it aloud. The results here are just as interesting. First, load or no load, the misinformation registered at roughly the same rate. Second, and consistent with Experiment 1, the retraction with or without a load was equally effective if the encoding was weak (i.e., the misinformation was given under additional cognitive load), but could not eliminate the misinformation. On the other hand, when the encoding was strong (i.e., with no load), retractions under load had no significant effect in correcting false information: only retractions with no load did. Warning to the reader: if you were texting on your Crackberry while you were reading this paragraph, it is likely that you did not understand it very well (and you sent the text meant for your girlfriend to your thesis advisor).
The authors conclude with a little simulation exercise of a model in which some of the information is randomly sampled from memory; it suggests that the number of corrections/retractions has to match the number of misinformation pieces to be at least somewhat effective, and it produces results similar to the findings in the first experiment.
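To get an intuition for that kind of model, here is a minimal sketch, not the authors' actual simulation: memory holds one copy of the misinformation per repetition, each retraction "tags" one copy, and a tagged copy is suppressed at recall only when strategic retrieval succeeds (a probability I set arbitrarily to 0.7). The function name and all parameter values are my own assumptions for illustration.

```python
import random

def recall_references(n_mi, n_retract, p_strategic=0.7, n_queries=1000, seed=0):
    """Toy model: memory holds n_mi copies of a piece of misinformation;
    each retraction tags one (untagged) copy. At each recall query a copy
    is sampled uniformly; a tagged copy is suppressed only if strategic
    retrieval succeeds (probability p_strategic). Returns the fraction of
    queries that produce a reference to the misinformation."""
    rng = random.Random(seed)
    tagged = min(n_retract, n_mi)  # extra retractions add nothing in this sketch
    refs = 0
    for _ in range(n_queries):
        copy = rng.randrange(n_mi)          # automatic sampling from memory
        if copy < tagged:
            if rng.random() > p_strategic:  # strategic suppression failed
                refs += 1
        else:
            refs += 1                       # untagged copy: reference slips out
    return refs / n_queries

# Qualitative pattern of Experiment 1: one retraction barely dents strongly
# encoded (3x) misinformation; three retractions help much more, but
# references never reach zero because p_strategic < 1.
weak_1r = recall_references(n_mi=1, n_retract=1)
strong_1r = recall_references(n_mi=3, n_retract=1)
strong_3r = recall_references(n_mi=3, n_retract=3)
```

Even in this crude version, retractions only chip away at the copies they tag, so matching the number of retractions to the number of repetitions reduces, but never eliminates, references to the misinformation.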
So, what are the implications? The authors suggest that repetitions of misinformation are likely to be stronger if they come from multiple (and somewhat independent) sources. So, if one person is repeatedly providing misinformation, this will likely be less effective than multiple sources repeating the same misinformation with the same frequency. Combining this with the fact that retractions have to be equally vigorous to be at least somewhat effective in changing opinions, the picture is not pretty.
This gives all of us more responsibility as authors and as disseminators of research. Suppose that Tyler Cowen blogs about a paper, which is then linked to by Chris Blattman with no additional comment other than a tip of the hat, then retweeted by Poverty Action, and finally reported, perhaps, in a newspaper blog. If all the links subsequent to the first post relied on the first one without examining the paper or the evidence for themselves (I am not saying that they do; the names here are popular bloggers with large followings), then this gives the impression of independent sources confirming (or at least repeating) the same piece of information while, in fact, they are completely dependent on the first report. If someone subsequently finds a flaw with the paper, they can also write about it, but unless the first set of bloggers (or an equal number with equal credibility) also reports the correction with equal vigor, high levels of misinformation will continue. Even if they do, the misinformation will still not disappear.
Of course, this makes the job of bloggers much harder. We could try to rely on the credibility of the original source (the author of the paper, whether it is published, where it is published, who blogged about it, etc.) for our initial report. Nonetheless, will most of us bother to read every new working paper carefully before blogging about it? I am not sure, but maybe we should…
You're right: bloggers are not the only ones that might spread misinformation -- intentionally or unintentionally. People with influence, such as a vice presidential candidate, can do more damage (or good) than your average shmo like myself.
You actually touch on a lot of issues that have to do with the impact and influence of blogs, which is becoming a bigger debate, and on which we are currently writing a paper. People argue that social media is becoming much more influential and, in this paper (http://econpapers.repec.org/article/kapjculte/v_3a32_3ay_3a2008_3ai_3a4…) Tyler Cowen argues that it is journalists and news outlets who go to the prolific bloggers for information. He argues that the quality of discussion (partly via the feedback channels) is higher among blogs than in newspapers or news outlets.
Not everyone agrees, though. Sunstein (in this special issue of Public Economics: http://www.springerlink.com/content/0048-5829/134/1-2/) argues that group polarization (i.e., people linking to and reading like-minded sources to reinforce their beliefs) prevents good information from being spread for deliberation -- a sentiment you seem to be expressing above wrt partisanship and correcting misinformation.
I just focused on bloggers and journalists, because that's what I am worried about these days -- how we're doing in development economics, political science, medicine, etc. We might live up to Tyler Cowen's optimistic view of blogs as great spaces for sophisticated deliberation or end up mimicking the traditional media -- just trying to figure out how to avoid the latter...
Thanks again for the thoughtful comments.
Early in the post you allude to misinformation on vaccines with autism, and WMDs with Iraq. You then end your post by talking about bloggers and their network influences.
With all due respect, blogs represent a small portion of how people consume information; especially on the policy front, I find news outlets and politician outreach far more widespread in their connectivity to voters. Wouldn't that make the specter of misinformation more threatening and sustained?
Take, in the US, the "death panels" comment by Sarah Palin, which dominated the health care debate in 2009. The state of partisanship makes the ability to correct such misinformation even worse.
On a less public level of discourse, I sense a similar digression on blogs relating to foreign policy and language (especially the 'English is dying because of Americanisms/foreign influence/growth of Chinese and Spanish' front).