Internet manipulation often aims to change user perceptions and their corresponding behaviors. Today,
fake news,
disinformation attacks, and
deepfakes can secretly affect behavior in ways that are difficult to detect. It has been found that content evoking high-arousal emotions (e.g. awe, anger, anxiety, or content with hidden sexual meaning) is more
viral, and that content which is surprising, interesting, or useful is more likely to be shared. Providing and perpetuating simple explanations for complex circumstances may also be used for online manipulation: such explanations are often easier to believe, circulate before any adequate investigation has taken place, and achieve higher virality than complex, nuanced explanations and information. (See also:
Low-information rationality) Prior collective ratings of
web content influence one's own perception of it. A 2015 study showed that the perceived beauty of a piece of artwork viewed online varies with external influence: confederate ratings, manipulated for opinion and credibility, shifted how experiment participants evaluated the artwork. Furthermore, on
Reddit, it has been found that content which initially receives a few downvotes often continues trending negative, and vice versa for early upvotes. This is referred to as "bandwagon/snowball voting" by Reddit users and administrators.
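As a rough illustration of how such a snowball dynamic can compound, the sketch below simulates voters whose probability of upvoting is nudged by an item's current visible score; the model and its parameters are purely hypothetical and do not describe Reddit's actual voting system.

```python
import random

def simulate_item(initial_score, voters=500, base_p=0.5, herding=0.02, seed=0):
    """Simulate an item's final score when each voter is nudged by the
    currently visible score (all parameters are illustrative)."""
    rng = random.Random(seed)
    score = initial_score
    for _ in range(voters):
        # The visible score shifts the next voter toward the prevailing direction.
        p_up = min(max(base_p + herding * score, 0.05), 0.95)
        score += 1 if rng.random() < p_up else -1
    return score

# Identical voters, different early votes: a small initial push tends to
# snowball into a large gap between the two items' final scores.
print(simulate_item(+3), simulate_item(-3))
```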
Echo chambers and
filter bubbles might be created by
website administrators or
moderators locking out people with differing viewpoints, by establishing certain rules, or by the typical member viewpoints of
online sub/communities or
Internet "tribes". Fake news does not even need to be read to have an effect: its headlines and
sound bites alone can shape perceptions through their sheer volume and emotional impact. The apparent prevalence of specific points, views, issues, and people can be amplified.
Social media activities and other data can be used to analyze the personality of people and predict their behaviour and preferences.
Michal Kosinski developed such a procedure. (See also:
Targeted advertising,
Personalized marketing) Scholars Daniel Susser, Beate Roessler, and Helen Nissenbaum assert that information technology makes online manipulation much easier, and that extra attention must be paid to how such technologies are used, as the effects of the manipulation are not apparent until "after the harm has already been done."
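The general shape of such a procedure can be sketched as a simple supervised-learning pipeline: binary "like" indicators serve as features to predict a trait or preference. The data, feature set, and trait below are invented placeholders, not Kosinski's actual dataset or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical data: one row per user, one column per page they may have "liked".
n_users, n_pages = 1000, 50
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Invented ground truth: a binary trait loosely driven by a handful of pages.
weights = np.zeros(n_pages)
weights[:5] = [1.5, -1.0, 0.8, 1.2, -0.7]
trait = (likes @ weights + rng.normal(0, 1, n_users)) > 0

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Held-out accuracy: how well "likes" alone predict the (synthetic) trait.
print("held-out accuracy:", model.score(X_test, y_test))
```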

== Algorithms, echo chambers and polarization ==
Due to the overabundance of online content,
social networking platforms and
search engines have leveraged
algorithms to tailor and personalize users' feeds based on their individual preferences. However, algorithms also restrict exposure to different viewpoints and content, leading to the creation of
echo chambers or
filter bubbles. With the help of algorithms, filter bubbles influence users' choices and perception of reality by giving the impression that a particular point of view or representation is widely shared.
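A toy model of this feedback loop (with made-up topics and a deliberately simplified ranking rule, not any platform's actual recommender) illustrates how ranking by past engagement can progressively narrow what a user is shown:

```python
import random
from collections import Counter

random.seed(1)
topics = ["politics_left", "politics_right", "sports", "science", "music"]

def recommend(history, pool_size=20, shown=5):
    """Rank a random pool of candidate items by similarity to past engagement."""
    pool = [random.choice(topics) for _ in range(pool_size)]
    counts = Counter(history)
    # Items whose topic the user engaged with before are ranked higher.
    pool.sort(key=lambda topic: counts[topic], reverse=True)
    return pool[:shown]

history = ["politics_left"]           # one early click seeds the loop
for _ in range(50):                   # repeated recommend-and-engage rounds
    for item in recommend(history):
        if random.random() < 0.6:     # the user mostly engages with what is shown
            history.append(item)

# After many rounds the history (and hence the feed) is dominated by the topic
# of that first click, even though the candidate pool stays diverse.
print(Counter(history).most_common())
```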
Following the 2016 United Kingdom referendum on membership of the European Union and the
United States presidential election, this phenomenon gained attention as many individuals confessed surprise at results that seemed far removed from their expectations. The range of pluralism is influenced by the personalized individualization of services and the way it diminishes choice. Five manipulative verbal influences have been identified in media texts: self-expression, semantic speech strategies, persuasive strategies, swipe films, and information manipulation. The vocabulary toolkit for speech manipulation includes euphemisms, mood vocabulary, situational adjectives, slogans, verbal metaphors, etc. Research on echo chambers by Flaxman, Goel, and Rao,
Pariser, and Grömping suggests that the use of social media and search engines tends to increase ideological distance among individuals. Comparisons between online and off-line
segregation have indicated that segregation tends to be higher in face-to-face interactions with neighbors, co-workers, or family members, and reviews of existing research have shown that the available
empirical evidence does not support the most pessimistic views about
polarization. A 2015 study suggested that individuals' own choices drive algorithmic filtering, limiting exposure to a range of content. While algorithms may not be causing polarization, they could amplify it, representing a significant component of the new information landscape.

== Research and use by intelligence and military agencies ==