is made by advocates due to its increase on social networking sites. Despite the proliferation of social media websites, Facebook and Twitter showed the most activity in terms of active disinformation campaigns. Reported techniques included the use of bots to amplify hate speech, the illegal harvesting of data, and paid trolls to harass and threaten journalists. Whereas disinformation research has focused primarily on how actors orchestrate deceptions on social media, chiefly via fake news, newer research investigates how people take what started as deceptions and circulate them as their personal views. This research shows that disinformation can be conceptualized as a program that encourages engagement in oppositional fantasies (i.e., culture wars), through which disinformation circulates as rhetorical ammunition for never-ending arguments. Current research suggests that right-wing online political activists in the United States may be more likely to use disinformation as a strategy and tactic. The 2016 European Union referendum in the United Kingdom also saw British politicians supporting the Leave campaign spread disinformation on Twitter. Governments have responded with a wide range of policies to address concerns about the potential threats that disinformation poses to democracy; however, there is little agreement, in either elite policy discourse or the academic literature, as to what it means for disinformation to threaten democracy and how different policies might help to counter its negative implications.
== Consequences of exposure to disinformation online ==
There is a broad consensus amongst scholars that a high degree of disinformation, misinformation, and propaganda exists online; however, it is unclear what effect such disinformation has on political attitudes among the public and, therefore, on political outcomes. This conventional wisdom has come mostly from investigative journalists, with a particular rise during the 2016 U.S. election: some of the earliest work came from Craig Silverman at BuzzFeed News. Cass Sunstein supported this view in
#Republic, arguing that the internet would become rife with echo chambers and informational cascades of misinformation, leading to a highly polarized and ill-informed society. Later studies have confirmed the existence of echo chambers on social media. Research after the 2016 U.S. presidential election found that: (1) for 14 percent of Americans, social media was their "most important" source of election news; (2) known false news stories "favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times"; (3) the average American adult saw fake news stories, "with just over half of those who recalled seeing them believing them"; and (4) people are more likely to "believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks." Correspondingly, whilst there is wide agreement that the digital spread and uptake of disinformation during the 2016 election was massive and very likely facilitated by foreign agents, there is an ongoing debate over whether it had any actual effect on the election. For example, a double-blind randomized control experiment by researchers from the London School of Economics (LSE) found that exposure to online fake news about either Trump or Clinton had no significant effect on intentions to vote for those candidates. Researchers who examined the influence of Russian disinformation on Twitter during the 2016 U.S. presidential campaign found that exposure to disinformation was (1) concentrated among a tiny group of users, (2) primarily among Republicans, and (3) eclipsed by exposure to legitimate political news media and politicians. Finally, they found "no evidence of a meaningful relationship between exposure to the Russian foreign influence campaign and changes in attitudes, polarization, or voting behavior."
As such, despite its mass dissemination during the 2016 presidential election, online fake news or disinformation probably did not cost Hillary Clinton the votes needed to secure the presidency. Research on this topic remains inconclusive; for example, misinformation appears not to significantly change the political knowledge of those exposed to it. There seems to be a higher level of diversity in the news sources that users are exposed to on Facebook and Twitter than conventional wisdom would dictate, as well as a higher frequency of cross-spectrum discussion. Other evidence has found that disinformation campaigns rarely succeed in altering the foreign policies of the targeted states. Research is also challenging because disinformation is meant to be difficult to detect, and some social media companies have discouraged outside research efforts. For example, researchers found that disinformation made "existing detection algorithms from traditional news media ineffective or not applicable...[because disinformation] is intentionally written to mislead readers...[and] users' social engagements with fake news produce data that is big, incomplete, unstructured, and noisy."
== Alternative perspectives and critiques ==
Researchers have criticized the framing of disinformation as being limited to technology platforms, removed from its wider political context, and inaccurately implying that the media landscape was otherwise well-functioning. "The field possesses a simplistic understanding of the effects of media technologies; overemphasizes platforms and underemphasizes politics; focuses too much on the United States and Anglocentric analysis; has a shallow understanding of political culture and culture in general; lacks analysis of race, class, gender, and sexuality as well as status, inequality, social structure, and power; has a thin understanding of journalistic processes; and, has progressed more through the exigencies of grant funding than the development of theory and empirical findings." Alternative perspectives have been proposed:
• Moving beyond fact-checking and media literacy to studying disinformation as a pervasive phenomenon that involves more than news consumption.
• Moving beyond technical solutions, including AI-enhanced fact checking, to understanding the systemic basis of disinformation.
• Developing theory that moves beyond Americentrism toward a global perspective, one that accounts for cultural imperialism and Third World dependency on Western news and that situates disinformation in the Global South.
• Developing market-oriented disinformation research that examines the financial incentives and business models that nudge content creators and digital platforms to circulate disinformation online.

== Strategies for spreading disinformation ==