Much research on how to correct misinformation has focused on fact-checking. However, correcting misinformation this way can be challenging, because the information deficit model (the assumption that false beliefs stem simply from a lack of accurate information) does not necessarily apply well to beliefs in misinformation.
==Causes==
Factors that contribute to belief in misinformation are an ongoing subject of study. According to Scheufele and Krause, misinformation belief has roots at the individual, group, and societal levels. At the individual level, people vary in their skill at recognizing mis- or disinformation and may be predisposed to certain misinformation beliefs by other personal beliefs, motivations, or emotions. At the group level, in-group bias and a tendency to associate with like-minded or similar people can produce echo chambers and information silos that create and reinforce misinformation beliefs. At the societal level, public figures such as politicians and celebrities can disproportionately influence public opinion, as can mass media outlets. In addition, societal trends such as political polarization, economic inequality, declining trust in science, and changing perceptions of authority contribute to the impact of misinformation. Social media structures, which politicians and news media have leveraged for political and economic ends, have exacerbated the prevalence of misinformation.

Historically, people have relied on journalists and other information professionals to relay facts. As the number and variety of information sources has grown, it has become more challenging for the general public to assess their credibility. This growth in consumer choice allows news consumers to choose sources that align with their biases, which in turn increases the likelihood that they are misinformed. Polling shows that Americans trust mass media at record-low rates, and that US young adults place similar levels of trust in information from social media and from national news organizations. The pace of the 24-hour news cycle does not always allow for adequate fact-checking, potentially leading to the spread of misinformation. Further, the distinction between opinion and reporting can be unclear to viewers or readers.

Sources of misinformation can appear highly convincing and similar to trusted legitimate sources. For example, misinformation cited with hyperlinks has been found to increase readers' trust. Trust is even higher when the hyperlinks point to scientific journals, and higher still when readers do not click through to investigate the sources for themselves. Research has also shown that relevant images placed alongside incorrect statements increase both their believability and shareability, even when the images do not actually provide evidence for the statements. For example, a false statement about macadamia nuts accompanied by an image of a bowl of macadamia nuts tends to be rated as more believable than the same statement without an image. Dramatic headlines may gain readers' attention, but they do not always accurately reflect scientific findings.

Human cognitive tendencies can also contribute to misinformation belief. One study found that individuals' recollection of political events could be altered when they were presented with misinformation about the events, even when they had been primed to identify warning signs of misinformation. Misinformation may also be appealing because it seems novel or incorporates existing stereotypes.
==Identification==
Several strategies have been suggested to reduce misinformation. One approach is to evaluate the credibility and motivation of the source, as well as the plausibility of its claims. Readers tend to distinguish unintentional misinformation and uncertain evidence from politically or financially motivated misinformation. Once individuals believe misinformation to be true, its effects can be difficult to undo: people may want to reach a certain conclusion, leading them to accept information that supports it, and they are more likely to retain and share information that resonates with them emotionally.

The SIFT method, also called the Four Moves, is one commonly taught method of distinguishing between reliable and unreliable information. It instructs readers first to Stop and ask themselves what they are reading or viewing: do they know the source, and is it reliable? Second, readers should Investigate the source: what is its relevant expertise, and does it have an agenda? Third, readers should Find better coverage, looking for reliable reporting on the claim at hand to gauge whether there is a consensus around the issue. Finally, readers should Trace claims, quotes, or media to their original context: has important information been omitted, or is the original source questionable?

Visual misinformation presents particular challenges, but some identification strategies are effective. Misleading graphs and charts can be identified through careful examination of the data presentation; for example, truncated axes or poor color choices can cause confusion (illustrated in the sketch below). Reverse image searching can reveal whether images have been taken out of their original context. There are currently some somewhat reliable ways to identify AI-generated imagery, but detection is likely to become more difficult as the technology advances.

A person's formal education level and media literacy correlate with their ability to recognize misinformation. People who are familiar with a topic or with the processes of researching and presenting information, or who have critical evaluation skills, are more likely to correctly identify misinformation. These are not always direct relationships, however: higher overall literacy does not always improve the ability to detect misinformation. Context clues can also significantly affect people's ability to detect misinformation.
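As a minimal illustration of the truncated-axis problem mentioned above, the sketch below (assuming Python with matplotlib; the two data values are invented for illustration) draws the same bars twice, once with a full y-axis and once with a truncated one:

<syntaxhighlight lang="python">
# Demonstration of how a truncated y-axis exaggerates small differences.
# The data values are invented for illustration.
import matplotlib.pyplot as plt

labels = ["Group A", "Group B"]
values = [50.0, 51.5]  # a difference of only ~3%

fig, (ax_full, ax_cut) = plt.subplots(1, 2, figsize=(8, 3))

ax_full.bar(labels, values)
ax_full.set_ylim(0, 60)        # axis starts at zero: bars look nearly equal
ax_full.set_title("Full axis")

ax_cut.bar(labels, values)
ax_cut.set_ylim(49.5, 52.0)    # truncated axis: the same gap looks dramatic
ax_cut.set_title("Truncated axis")

plt.tight_layout()
plt.show()
</syntaxhighlight>

The underlying numbers are identical in both panels; only the axis range changes, which is why checking where an axis starts is one of the quickest ways to vet a chart.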
Martin Libicki, author of Conquest in Cyberspace: National Security and Information Warfare, notes that readers should aim to be skeptical but not cynical. Readers should not be gullible, believing everything they read without question, but neither should they be paranoid, assuming everything they see or read is false.
==Factors influencing susceptibility to misinformation==
Various demographic, cognitive, social, and technological factors can influence an individual's susceptibility to misinformation. This section examines how age, political ideology, and algorithms may affect vulnerability to false or misleading information.
===Age===
Research suggests that age can be a significant factor in how individuals process and respond to misinformation. Some researchers have suggested that older individuals are more susceptible to misinformation than younger individuals due to cognitive decline; other studies have found that, while this may be a factor, the issue is more complex than aging and cognitive decline alone. One area where cognitive decline appears to matter is repeated exposure: a study found that older adults are more likely than younger adults to believe misinformation after seeing it repeatedly, a phenomenon known as the illusory truth effect. This is linked to declines in memory and analytical reasoning, which can make it more challenging for older adults to distinguish between true and false information. Another commonly cited explanation for older adults' susceptibility to misinformation is a lack of digital literacy. According to a nationally representative 2023 Pew Research Center study of U.S. adults, 61% of adults aged 65 or older own a smartphone, 45% use social media, and 44% own a tablet computer. All three figures represent an increase over the previous decade, indicating that older adults are spending more time online and thereby increasing their potential exposure to misinformation.

===Political ideology===
Confirmation bias, the tendency to favor information that aligns with one's existing beliefs, fosters an environment in which misinformation matching one's views thrives, creating echo chambers. Researchers who explored the relationship between partisanship, the presence of an echo chamber, and vulnerability to misinformation found a strong correlation between right-wing partisanship and the sharing of online misinformation; they also observed a similar trend among left-leaning users. Related research has found that right- and left-wing partisans exhibit similar levels of metacognitive awareness, that is, conscious awareness of their own thoughts and mental processes: in a study that asked participants to identify news headlines as true or false, both Democrats and Republicans admitted to occasionally suspecting they were wrong. These findings, coupled with confirmation bias, contribute to a media ecosystem in which misinformation can thrive.
===Algorithms===
Social media algorithms are designed to increase user engagement. Research suggests that humans are naturally drawn to emotionally charged content, and algorithms perpetuate a cycle in which emotionally charged misinformation is disproportionately promoted on social media platforms (a minimal sketch of this ranking dynamic follows the list below). Such misinformation spreads rapidly through algorithmic amplification, outpacing the speed of fact-checking. Additionally, most social media users have only a limited understanding of how algorithms curate their information feeds.

[Image: A Google search for "Joaquín Correa brother" in which Google's AI Overview erroneously states that "Joaquín Correa's brother is named Ángel Correa", citing the English Wikipedia article on Ángel Correa (which did not support the false relationship claim) alongside another website. A note beneath the overview warns that "AI responses may include mistakes".]

The rise of artificial intelligence has also contributed to new types of misinformation and disinformation, termed synthetic media in a UNHCR factsheet. AI is capable of manipulating and modifying data and multimedia, and it is now used in algorithms that mislead audiences. Synthetic media could intensify fake news and support the spread of misinformation if misused. Deepfakes, a form of synthetic media in which people's faces are replaced, have recently gained popularity and attracted widespread attention for their use in fake news, hoaxes, fraud, and revenge porn. Speech synthesis, another type of synthetic media, amplifies deepfakes by artificially producing human speech with a speech computer. Synthetic media has become a concern for industries and governments; some countries already have a national response in place or national institutions working on detecting and limiting its use. However, AI also contributes to the fight against misinformation. Among the AI-related mechanisms that facilitate the spread of misinformation:
* Deepfakes and synthetic media can create highly convincing visual, audio, and textual evidence that is difficult to distinguish from legitimate authoritative evidence.
* Internet bots and automated internet trolls can rapidly sow disinformation.
* Algorithmic bias amplifies sensational and controversial material regardless of its truth.
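The engagement-driven dynamic described above can be made concrete with a minimal, purely illustrative sketch; the field names and weights below are invented and do not reflect any platform's actual ranking system:

<syntaxhighlight lang="python">
# Minimal sketch of engagement-based feed ranking (illustrative only).
# Field names and weights are hypothetical, not any real platform's system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    emotional_intensity: float  # hypothetical 0..1 score from a sentiment model

def engagement_score(post: Post) -> float:
    """Rank by predicted engagement; note that truthfulness never enters."""
    interactions = post.likes + 2.0 * post.shares + 1.5 * post.comments
    return interactions * (1.0 + post.emotional_intensity)

posts = [
    Post("Measured policy analysis", 120, 10, 30, 0.1),
    Post("Outrage-bait false claim!", 100, 40, 80, 0.9),
]

# The emotionally charged (and false) post ranks first.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):8.1f}  {post.text}")
</syntaxhighlight>

Because the objective is engagement rather than accuracy, emotionally charged misinformation can outrank sober reporting under a scoring rule like this one.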
===AI misinformation examples===
====LA wildfire Hollywood Sign====
In 2025, California experienced a firestorm disaster accompanied by a massive wave of AI-generated disinformation, which in turn spread as misinformation. One example was a set of AI-generated images of the Hollywood Sign on fire. Jeff Zarrinnam, chairman of the Hollywood Sign Trust, said: "They look so real that I couldn't tell if it was real or not. You know, if I didn't see the Hollywood Sign myself ... I would have probably believed that it was on fire." The Hollywood Sign was never on fire, but multiple people contacted Zarrinnam to ask whether the sign was okay because they believed it had burned.

A Microsoft AI for Good Lab study asked 12,500 participants worldwide to judge whether images were "Real or Artificial". In total, participants viewed 287,269 images, of which only 93,490 were real. Participants guessed correctly 62% of the time, showing how hard it is to detect AI-generated images.
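Putting the quoted figures in perspective, a quick calculation (using only the numbers reported above) shows the mix of images and what 62% accuracy amounts to:

<syntaxhighlight lang="python">
# Figures quoted from the Microsoft AI for Good Lab study above.
total_images = 287_269
real_images = 93_490
accuracy = 0.62

ai_images = total_images - real_images
print(f"AI-generated share: {ai_images / total_images:.1%}")    # ~67.5%
print(f"Real-image share:   {real_images / total_images:.1%}")  # ~32.5%
print(f"Correct judgments:  {accuracy * total_images:,.0f}")    # ~178,107

# Blind 50/50 guessing would score about 50%, so 62% is better than
# chance but far from reliable detection.
</syntaxhighlight>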
====Mata v. Avianca, Inc.====
ChatGPT uses large language models (LLMs) to generate text from human-produced data (books, articles, social media, etc.). LLMs are known to generate false information, whether because they ingested content from a parody site like The Onion, learned from people posting misinformation, or simply made up data, a behavior known as hallucination. One example of such LLM hallucination is the court case Roberto Mata, Plaintiff, v. Avianca, Inc., Defendant, in which two attorneys representing Mata submitted an AI-generated legal filing that cited hallucinated court cases.
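One safeguard this case suggests is to cross-check every LLM-supplied citation against a trusted index before relying on it. A minimal sketch follows; the KNOWN_CASES set here is a toy stand-in for a real legal database, and a real workflow would query an authoritative service instead:

<syntaxhighlight lang="python">
# Illustrative sketch: flag LLM-supplied citations that cannot be verified.
# KNOWN_CASES is a toy stand-in for an authoritative case-law index.
KNOWN_CASES = {
    "Mata v. Avianca, Inc.",
}

def flag_unverified(citations: list[str]) -> list[str]:
    """Return the citations absent from the trusted index."""
    return [c for c in citations if c not in KNOWN_CASES]

draft_citations = [
    "Mata v. Avianca, Inc.",
    "Varghese v. China Southern Airlines",  # one of the fictitious cases in the Mata filing
]

print(flag_unverified(draft_citations))
# ['Varghese v. China Southern Airlines'] -> verify by hand before filing
</syntaxhighlight>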
====December 8th earthquake====
On December 8, 2025, Japan experienced an earthquake disaster, after which multiple AI-generated videos emerged on social media. These videos purported to show and explain how the earthquake began, what happened during it, and its aftermath. The videos misinformed the Japanese public, prompting a government warning about such fake videos.
====Zelenskyy deepfakes====
In 2022, hackers broadcast a deepfake video of Volodymyr Zelenskyy telling his soldiers to surrender. Zelenskyy later debunked the deepfake.
====Brown University shooting====
On December 13, 2025, there was a shooting at Brown University in which the gunman wore a mask. Multiple people on social media used AI-generated images to fabricate a face for the gunman. AI cannot recover the gunman's real face from a masked image, and circulating fabricated AI-generated faces can lead to wrongful identification or arrest.

==Countermeasures==