Case studies

The American Trends Panel

The Pew Research Center surveyed over 10,000 adults in July 2020 to study social media's effect on politics and
social justice activism. 23% of respondents, all adult social media users, reported that social media content has caused them to change their opinion, positively or negatively, on a political or social justice issue. 35% of those respondents cited the
Black Lives Matter movement,
police reform, and/or
race relations. Alt-right groups can share and persuade others of their controversial beliefs as bluntly and brashly as they desire, on any platform, which may have played a role in the 2016 election. Although the study could not determine exactly what effect this had on the election, it did provide extensive research on the characteristics of
media manipulation and
trolling. Gray examined
sexism and
racism in the
online gaming community. Gamers attempt to identify the gender, sexuality, and ethnic background of their teammates and opponents through
linguistic profiling, when the other players cannot be seen. One of the most notable "
clans," Puerto Reekan Killaz, has created an online gaming space where Black and Latina women of the LGBTQIA+ community can play without risk of
racism,
nativism,
homophobia,
sexism, and
sexual harassment.
Identity tourism often leads to
stereotyping,
discrimination, and
cultural appropriation.
Anti-Chinese Rhetoric Employed by Perpetrators of Anti-Asian Hate

As of August 2020, over 2,500
Asian-Americans have reported experiencing racism fueled by
COVID-19, with 30.5% of those cases containing anti-Chinese rhetoric, according to
Stop AAPI (Asian-American/Pacific Islander) Hate. The language used in these incidents is divided into five categories: virulent animosity,
scapegoating of China, anti-immigrant
nativism, racist characterizations of Chinese, and racial
slurs. 60.4% of these reported incidents fit into the virulent animosity category, which includes phrases such as "get your Chinese ass away from me!"
Pakistan

Online hate speech and
cyberbullying against
religious and ethnic minorities,
women, and other
socially marginalized groups have long been an issue that is downplayed and/or ignored in the
Islamic Republic of Pakistan. Hate speech against
Ahmadis, both online and in real life, has led to their large-scale
persecution.
BytesForAll, a
South Asian initiative and an
APC member project, released a study on hate speech online in Pakistan on June 7, 2014. The research included two independent phases.
Myanmar

Internet access in Myanmar has grown at unprecedented rates as the country transitions towards greater openness, a shift that has also brought the negative sides of social media, such as hate speech and calls to violence. In 2014, the
UN Human Rights Council Special Rapporteur on Minority Issues expressed her concern over the spread of
misinformation, hate speech and
incitement to violence, discrimination and hostility in the media and Internet, particularly targeted against a minority community. One challenge in this process has concerned ethnic and religious minorities. In 2013, 43 people were killed due to clashes that erupted after a dispute in the
Rakhine State in the western part of the country. Against this backdrop, the rapid emergence of new online spaces, albeit for a fraction of the population, has reflected some of these deeply rooted tensions in a new form. In this complex situation, a variety of actors have begun to mobilize, seeking to offer responses that can avoid further violence. Facebook has sought to take a more active role in monitoring how its platform is used in Myanmar, developing
partnerships with local organizations and making guidelines on reporting problems accessible in Burmese. Local activists have focused on local solutions rather than trying to mobilize
global civil society on these issues. This is in contrast to some other online campaigns that have been able to attract the world's attention towards relatively neglected problems. Initiatives such as those promoted by the
Save Darfur Coalition for the
civil war in Sudan, or the organization
Invisible Children with the
Kony2012 campaign that denounced the atrocities committed by the
Lord's Resistance Army, are popular examples. As commentaries on these campaigns have pointed out, such global responses may have negative repercussions on the ability for local solutions to be found.
Ethiopia

2019–2020

The long-standing ethnic rivalry in
Ethiopia between the
Oromo people and the
Amhara people found a battleground on
Facebook, leading to hate speech, threats,
disinformation, and deaths. Facebook does not have
fact-checkers who speak either of the dominant languages of Ethiopia, nor does it provide translations of its Community Standards; as a result, hate speech on Facebook in Ethiopia goes largely unmonitored. Instead, Facebook relies on activists to flag potential hate speech and disinformation, but many feel mistreated and burned out. In October 2019, Ethiopian activist
Jawar Mohammed falsely announced on Facebook that the police were going to detain him, citing religious and ethnic tension. This prompted the community to protest his alleged
detention and the broader racial and ethnic tensions, which led to over 70 deaths. A disinformation campaign originated on Facebook, centering on the popular Ethiopian singer
Hachalu Hundessa, of the Oromo ethnic group. The posts accused Hundessa of supporting Ethiopia's controversial Prime Minister
Abiy Ahmed, whom Oromo
nationalists disapproved of for catering to other ethnic groups. Hundessa was assassinated in June 2020 following the hateful Facebook posts, prompting public outrage. In a long thread of hateful content, Facebook users blamed the Amhara people for Hundessa's assassination without any evidence.
Twitter

In December 2017,
Twitter began enforcing new policies towards hate speech, banning multiple accounts as well as setting new guidelines for what will be allowed on their platform. There is an entire page in the Twitter Help Center devoted to describing their Hateful Conduct Policy, as well as their enforcement procedures. The top of this page states "Freedom of expression means little if voices are silenced because people are afraid to speak up. We do not tolerate behavior that harasses, intimidates, or uses fear to silence another person’s voice. If you see something on Twitter that violates these rules, please report it to us." Twitter's definition of
hate speech ranges from "violent threats" and "wishes for the physical harm, death, or disease of individuals or groups" to "repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone." Punishments for violations range from suspending a user's ability to tweet until they take down the offensive or hateful post, to the removal of an account entirely. In a statement following the implementation of its new policies, Twitter said: "In our efforts to be more aggressive here, we may make some mistakes and are working on a robust appeals process ... We’ll evaluate and iterate on these changes in the coming days and weeks, and will keep you posted on progress along the way." These changes come at a time when action is being taken to prevent hate speech around the globe, including new laws in Europe which impose fines on sites that fail to address
hate speech reports within 24 hours.
YouTube

YouTube, an
online video platform and subsidiary of the tech company
Google, allows for easy
content distribution and access for any content creator, which creates opportunities for audiences to access content that shifts to the right or left of the
moderate ideology common in
mainstream media. YouTube provides incentives to popular content creators, prompting some creators to optimize the
YouTuber experience and post
shock-value content that may promote
extremist,
hateful ideas. Content diversity and
monetization on YouTube direct a broad audience to potentially harmful content from extremists, while radical content creators still have their channels and
subscribers to keep them culturally relevant and financially afloat. YouTube's hate speech policy is worded as follows: "We encourage free speech and try to defend your right to express unpopular points of view, but we don't permit hate speech. Hate speech refers to content that promotes violence against or has the primary purpose of inciting hatred against individuals or groups based on certain attributes, such as: race or ethnic origin, religion, disability, gender, age, veteran status, sexual orientation/gender identity". To counteract the growing trend of hate speech, YouTube has built in a user reporting system, one of the most popular deterrents: users are able to anonymously report another user for content they deem inappropriate. The content is then reviewed against YouTube policy and age restrictions, and either taken down or left up.
Facebook

Facebook's terms forbid content that is harmful or threatening, or which has the potential to stir hatred and incite violence. In its community standards, Facebook elaborates that "Facebook removes hate speech, which includes content that directly attacks people based on their: race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender or gender identity, or serious
disabilities or
diseases." It further states that "We allow humour,
satire or social commentary related to these topics, and we believe that when people use their authentic identity, they are more responsible when they share this kind of commentary. For that reason, we ask that Page owners associate their name and Facebook Profile with any content that is insensitive, even if that content does not violate our policies. As always, we urge people to be conscious of their audience when sharing this type of content." Facebook's hate speech policies are enforced by 7,500 content reviewers, as well as many artificial intelligence monitors. Because this requires difficult decision-making, controversy arises among content reviewers over enforcement of the policies, and some users feel that enforcement is inconsistent. One past example involved two separate but similarly graphic postings that wished death on members of a specific religion. Both posts were flagged by users and reviewed by Facebook staff, yet only one was removed, even though they carried almost identical sentiments. In a quote regarding hate speech on the platform, Facebook Vice President of Global Operations Justin Osofsky stated: "We’re sorry for the mistakes we have made — they do not reflect the community we want to help build…We must do better." While the company protects against gender-based hatred, it does not protect against hatred based on occupation. Facebook has also been accused of bias when policing hate speech, with critics citing political campaign ads that may promote hate or misinformation and have made an impact on the platform. When Facebook initially flags content that may contain hate speech, it assigns the content to Tier 1, 2, or 3, based on the content's severity. Tier 1 is the most severe and Tier 3 the least. Tier 1 includes anything that conveys "violent speech or support for death/disease/harm." Tier 2 is classified as content that slanders another user's image mentally, physically, or morally.
Tier 3 includes anything that can potentially exclude or discriminate against others, or that uses slurs about protected groups, but does not necessarily apply to arguments to restrict immigration or criticism of existing immigration policies. In May 2019, Facebook announced bans on several prominent people for violations of its prohibition on hate speech, including
Alex Jones,
Louis Farrakhan,
Milo Yiannopoulos,
Laura Loomer, and
Paul Nehlen. In 2020, Facebook added guidelines to Tier 1 that forbid
blackface, racial comparisons to animals, racial or religious
stereotypes, denial of historical events, and
objectification of women and the
LGBTQIA+ community. Hate speech on Facebook and Instagram quadrupled in 2020, leading to the removal of 22.5 million posts from Facebook and 3.3 million from Instagram in the second quarter of 2020 alone.
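As a rough illustration of the tier scale described above, the severity ordering can be modeled as a simple lookup. The category names and the mapping below are illustrative assumptions for this sketch only, not Facebook's actual moderation logic or API:

```python
# Toy sketch of a three-tier severity scale like the one described above.
# Category names and this mapping are hypothetical, not Facebook's real system.
TIER_OF_CATEGORY = {
    "violent_speech": 1,     # Tier 1: support for death/disease/harm (most severe)
    "slander": 2,            # Tier 2: attacks on a person's image
    "exclusion_or_slur": 3,  # Tier 3: exclusion, discrimination, slurs (least severe)
}

def tier_for(categories):
    """Return the most severe (lowest-numbered) tier among the flagged
    categories, or None if nothing matched a known category."""
    tiers = [TIER_OF_CATEGORY[c] for c in categories if c in TIER_OF_CATEGORY]
    return min(tiers) if tiers else None

# When a post is flagged under several categories, the most severe tier applies.
print(tier_for(["slander", "violent_speech"]))  # -> 1
print(tier_for(["exclusion_or_slur"]))          # -> 3
print(tier_for([]))                             # -> None
```

The design choice here mirrors the prose: tiers are ordered by severity, so a post matching multiple categories is handled at its worst tier.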
Microsoft

Microsoft has specific rules concerning hate speech for a variety of its
applications. Its
policy for mobile phones prohibits applications that "contain any content that advocates discrimination, hatred, or violence based on considerations of race, ethnicity, national origin, language, gender, age, disability, religion, sexual orientation, status as a veteran, or membership in any other social group." The company also has rules regarding online gaming, which prohibit any communication that is indicative of "hate speech, controversial religious topics and sensitive current or historical events."
TikTok

TikTok has been criticised for lacking clear guidelines and controls on hate speech, a gap that has allowed cases of
bullying,
harassment,
propaganda, and hate speech to appear on
TikTok. It can be argued that because children are naive and easily influenced by other people and messages, they are more likely to listen to and repeat what they are shown or told. Other social media platforms such as
Instagram,
Twitter, and
Facebook have been active long enough to know how to battle online hate speech and vulgar content.
Citizenship education focuses on preparing individuals to be informed and responsible citizens through the study of rights, freedoms, and responsibilities and has been variously employed in societies emerging from violent conflict. One of its main objectives is raising awareness on the political, social and cultural rights of individuals and groups, including
freedom of speech and the responsibilities and social implications that emerge from it. The concern of
citizenship education with hate speech is twofold: it encompasses the knowledge and skills to identify hate speech, and should enable individuals to counteract messages of hatred. One of its current challenges is adapting its goals and strategies to the digital world, providing not only
argumentative but also
technological knowledge and skills that a citizen may need to counteract online hate speech. Multiple and complementary literacies become critical. The emergence of new technologies and social media has played an important role in this shift. Individuals have evolved from being only
consumers of media messages to producers, creators and curators of information, resulting in new models of participation that interact with traditional ones, like
voting or joining a
political party. Teaching strategies are changing accordingly, from fostering critical reception of media messages to also empowering the creation of media content. The concept of media and information literacy itself continues to evolve, being augmented by the dynamics of the Internet. It is beginning to embrace issues of
identity, ethics and rights in
cyberspace. Some of these skills can be particularly important when identifying and responding to hate speech online. A series of initiatives has emerged, aimed both at providing information and at offering practical tools for Internet users to be active digital citizens:
• ‘No place for hate' by the Anti-Defamation League (ADL), United States;
• ‘In other words' project by the Provincia di Mantova and the European Commission;
• ‘Facing online hate' by MediaSmarts, Canada;
• ‘No hate speech movement' by the Youth Department of the Council of Europe, Europe;
• ‘Online hate' by the Online Hate Prevention Institute, Australia.
Education is also seen as a tool against hate speech. Laura Geraghty from the ‘No Hate Speech Movement' affirmed: "Education is key to prevent hate speech online. It is necessary to raise awareness and empower people to get online in a responsible way; however, you still need the legal background and instruments to prosecute hate crimes, including hate speech online, otherwise the preventive aspect won't help."