Commercial Content Moderation is a term coined by Sarah T. Roberts to describe the practice of "monitoring and vetting user-generated content (UGC) for social media platforms of all types, in order to ensure that the content complies with legal and regulatory exigencies, site/community guidelines, user agreements, and that it falls within norms of taste and acceptability for that site and its cultural context".
==Industrial composition==
The content moderation industry is estimated to be worth US$9 billion. While no official numbers are provided, there were an estimated 10,000 content moderators for TikTok, 15,000 for Facebook, and 1,500 for Twitter as of 2022.
The global value chain of content moderation typically includes social media platforms, large multinational enterprise (MNE) firms, and content moderation suppliers. The social media platforms (e.g. Facebook, Google) are largely based in the United States, Europe, and China. The MNEs (e.g. Accenture, Foiwe) are usually headquartered in the Global North or India, while suppliers of content moderation are largely located in Global South countries such as India and the Philippines.
While at one time this work may have been done by volunteers within the online community, for commercial websites it is largely achieved by outsourcing the task to specialized companies, often in low-wage areas such as India and the Philippines. Outsourcing of content moderation jobs grew as a result of the social media boom: with the overwhelming growth of users and UGC, companies needed many more employees to moderate the content. In the late 1980s and early 1990s, tech companies had already begun to outsource jobs to foreign countries with an educated workforce willing to work for lower wages.
==Content labels==
Content labels are applied to user-generated content as a way of adding information, helping users decide how to navigate and use particular contributions and allowing moderators to organize and govern anything deemed inappropriate. Examples of content labels include fact checks, "click to see" barriers, sensitivity warnings, and simple additional information. As a specific concept, content labels have been recognized only in recent years and remain little researched, although information labeling in a general sense has existed for decades. Social media platforms' use of content labels can create debate over whether to prioritize one user's right to free speech or to avoid risking another user's health and safety.
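A minimal sketch of how a platform might represent and apply such labels follows; the label types, field names, and rendering logic are illustrative assumptions, not any specific platform's implementation:

<syntaxhighlight lang="python">
from dataclasses import dataclass
from enum import Enum

class LabelType(Enum):
    # Illustrative categories drawn from the examples above
    FACT_CHECK = "fact check"
    CLICK_TO_SEE = "click to see"              # interstitial the user must click through
    SENSITIVITY_WARNING = "sensitivity warning"
    ADDITIONAL_INFO = "additional info"

@dataclass
class ContentLabel:
    label_type: LabelType
    note: str            # text shown to the user, e.g. a fact-check summary
    hides_content: bool  # whether the content is obscured until acknowledged

def render(post_text: str, labels: list[ContentLabel]) -> str:
    """Show each label's note; obscure the post body if any label requires a click-through."""
    notices = [f"[{label.label_type.value}] {label.note}" for label in labels]
    hidden = any(label.hides_content for label in labels)
    body = "(click to see content)" if hidden else post_text
    return "\n".join(notices + [body])

# Example: a sensitivity warning that obscures the post until acknowledged
warning = ContentLabel(LabelType.SENSITIVITY_WARNING, "May contain disturbing imagery", True)
print(render("...graphic scene...", [warning]))
</syntaxhighlight>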
==Working conditions==
Employees work by viewing, assessing, and deleting disturbing content. Wired reported in 2014 that they may suffer psychological damage. In 2017, The Guardian reported that secondary trauma may arise, with symptoms similar to PTSD. Some large companies, such as Facebook, offer psychological support. In 2019, NPR called it a job hazard.
Non-disclosure agreements are the norm when content moderators are hired, making moderators more hesitant to speak up about working conditions or to organize. The number of tasks completed (for example, labeling content as a copyright violation, deleting a post containing hate speech, or reviewing graphic content) is quantified for performance and quality assurance.
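As a rough illustration of how such quantification might work, the sketch below computes per-shift throughput and a quality-assurance agreement rate; the task names, metrics, and numbers are hypothetical, not any vendor's actual system:

<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class Task:
    action: str    # e.g. "copyright_label", "hate_speech_removal", "graphic_review"
    upheld: bool   # whether a quality-assurance reviewer agreed with the decision

def performance_report(tasks: list[Task], shift_hours: float) -> dict:
    """Compute illustrative throughput and accuracy metrics for one moderator."""
    total = len(tasks)
    upheld = sum(task.upheld for task in tasks)
    return {
        "tasks_per_hour": total / shift_hours,
        "qa_accuracy": upheld / total if total else 0.0,  # share of decisions QA upheld
    }

# Example: 400 decisions in an 8-hour shift, 97% upheld by quality assurance
tasks = [Task("graphic_review", i % 100 < 97) for i in range(400)]
print(performance_report(tasks, shift_hours=8))
</syntaxhighlight>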
Cognizant employees tasked with content moderation for Facebook developed mental health issues, including post-traumatic stress disorder, as a result of exposure to graphic violence, hate speech, and conspiracy theories in the videos they were instructed to evaluate. Moderators at the Phoenix office reported drug abuse, alcohol abuse, and sexual intercourse in the workplace, and feared retaliation from terminated workers who threatened to harm them. In response, a Cognizant representative stated the company would examine the issues in the report. Employees in the Tampa location described working conditions that were worse than those in the Phoenix office. Similarly, moderators working for Meta's outsourced moderation companies in Kenya and Ghana reported mental illness, self-harm, attempted suicide, poor working conditions, low pay, and retaliation for advocating for better working conditions. Moderators were required to sign non-disclosure agreements with Cognizant to obtain the job, although three former workers broke the agreements to provide information to The Verge. In the Tampa office, workers reported inadequate mental health resources. As a result of exposure to videos depicting graphic violence, animal abuse, and child sexual abuse, some employees developed psychological trauma and post-traumatic stress disorder. In response to negative coverage related to its content moderation contracts, a Facebook director indicated that Facebook was developing a "global resiliency team" to assist its contractors. In late 2018, Facebook created an Oversight Board, an internal "Supreme Court" of sorts, to decide what content remains and what is removed.
==Twitter==
Social media site Twitter has a suspension policy. Between August 2015 and December 2017, it suspended over 1.2 million accounts for terrorist content in an effort to reduce the number of followers and the amount of content associated with the Islamic State. Following the acquisition of Twitter by Elon Musk in October 2022, content rules were weakened across the platform in an attempt to prioritize free speech; however, the effects of this campaign have been called into question.

==Distributed moderation==