
Content moderation

Content moderation, in the context of websites that facilitate user-generated content, is the systematic process of identifying, reducing, or removing user contributions that are irrelevant, obscene, illegal, harmful, or insulting. This process may involve either direct removal of problematic content or the application of warning labels to flagged material. As an alternative approach, platforms may enable users to independently block and filter content based on their preferences. This practice operates within the broader domain of trust and safety frameworks.

Supervisor moderation
Also known as unilateral moderation, this kind of moderation system is often seen on Internet forums. A group of people are chosen by the site's administrators (usually on a long-term basis) to act as delegates, enforcing the community rules on their behalf. These moderators are given special privileges to delete or edit others' contributions and/or exclude people based on their e-mail address or IP address, and generally attempt to remove negative contributions throughout the community.
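The privilege model behind supervisor moderation can be reduced to a simple role check. The following Python sketch is a hypothetical minimal illustration, not any forum's actual implementation; all names (User, delete_post, ban_by_ip) are invented for this example.

```python
# A hypothetical sketch of supervisor-moderation privileges; not any
# real forum's code. Moderators hold a flag granting extra powers.
from dataclasses import dataclass

banned_ips: set[str] = set()          # addresses excluded from the site


@dataclass
class User:
    name: str
    ip: str
    is_moderator: bool = False        # set by site administrators


def delete_post(actor: User, post_author: User) -> bool:
    """Only a moderator (or the author) may delete a contribution."""
    return actor.is_moderator or actor is post_author


def ban_by_ip(actor: User, target: User) -> bool:
    """Moderators may exclude people by IP address (e-mail works alike)."""
    if actor.is_moderator:
        banned_ips.add(target.ip)
        return True
    return False
```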
Commercial content moderation
Commercial Content Moderation is a term coined by Sarah T. Roberts to describe the practice of "monitoring and vetting user-generated content (UGC) for social media platforms of all types, in order to ensure that the content complies with legal and regulatory exigencies, site/community guidelines, user agreements, and that it falls within norms of taste and acceptability for that site and its cultural context".

Industrial composition
The content moderation industry is estimated to be worth US$9 billion. While no official numbers are published, as of 2022 there were an estimated 10,000 content moderators for TikTok, 15,000 for Facebook, and 1,500 for Twitter. The global value chain of content moderation typically includes the social media platforms, large multinational enterprise (MNE) firms, and the content moderation suppliers. The social media platforms (e.g. Facebook, Google) are largely based in the United States, Europe, and China. The MNEs (e.g. Accenture, Foiwe) are usually headquartered in the global north or India, while the suppliers of content moderation are largely located in global southern countries such as India and the Philippines.

While this work may once have been done by volunteers within the online community, for commercial websites it is largely achieved by outsourcing the task to specialized companies, often in low-wage areas such as India and the Philippines. Outsourcing of content moderation jobs grew as a result of the social media boom: with the overwhelming growth of users and UGC, companies needed many more employees to moderate the content. In the late 1980s and early 1990s, tech companies had already begun outsourcing jobs to foreign countries with educated workforces willing to work for lower wages.

Content labels
Content labels are attached to user-generated content to add information, helping users decide how to navigate and use certain contributions and helping moderators organize and govern material deemed inappropriate. Examples of content labels include fact checks, "click to see" barriers, sensitivity warnings, or simply additional context. Content labels have been recognized as a distinct concept only in recent years, and research on them remains limited, although information labeling in a general sense has existed for decades. When social media platforms apply content labels, debates can arise over whether to prioritize one user's right to free speech or another user's health and safety.
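To make the idea concrete, the following Python sketch shows how labels might gate what a viewer sees first. It is an illustrative assumption, not any platform's actual labeling API; the Label, Post, and render_preview names are hypothetical.

```python
# A hypothetical sketch of content labeling; not any platform's real API.
from dataclasses import dataclass, field
from enum import Enum


class Label(Enum):
    FACT_CHECK = "fact check"            # adds context from fact-checkers
    CLICK_TO_SEE = "click to see"        # interstitial the viewer must click through
    SENSITIVITY_WARNING = "sensitive"    # warns before showing the media
    ADDITIONAL_INFO = "more info"        # neutral supplementary context


@dataclass
class Post:
    body: str
    labels: set[Label] = field(default_factory=set)
    label_note: str = ""                 # e.g. a fact-checker's summary


def render_preview(post: Post) -> str:
    """Decide what a viewer sees first, based on the attached labels."""
    if Label.CLICK_TO_SEE in post.labels or Label.SENSITIVITY_WARNING in post.labels:
        # Hide the content behind a barrier instead of removing it.
        return "[Content hidden: click to view]"
    if Label.FACT_CHECK in post.labels:
        # Show the post together with the fact-check note.
        return f"{post.body}\n[Fact check: {post.label_note}]"
    return post.body


post = Post("Miracle cure found!", {Label.FACT_CHECK},
            "Independent reviewers rate this claim as false.")
print(render_preview(post))
```

The design choice illustrated here is the one the section describes: labels add information or friction rather than deleting the contribution outright.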
Working conditions
Employees do the work by viewing, assessing, and deleting disturbing content, and, as Wired reported in 2014, they may suffer psychological damage. In 2017, The Guardian reported that secondary trauma may arise, with symptoms similar to PTSD. Some large companies such as Facebook offer psychological support; nonetheless, in 2019 NPR called the work a job hazard. Non-disclosure agreements are the norm when content moderators are hired, which makes moderators more hesitant to speak up about working conditions or to organize. The number of tasks completed, for example labeling content as a copyright violation, deleting a post containing hate speech, or reviewing graphic content, is quantified for performance and quality assurance.

Cognizant employees tasked with content moderation for Facebook developed mental health issues, including post-traumatic stress disorder, as a result of exposure to graphic violence, hate speech, and conspiracy theories in the videos they were instructed to evaluate. Moderators at the Phoenix office reported drug abuse, alcohol abuse, and sexual intercourse in the workplace, and feared retaliation from terminated workers who threatened to harm them. In response, a Cognizant representative stated the company would examine the issues in the report. Employees in the Tampa location described working conditions that were even worse than those in the Phoenix office: workers reported inadequate mental health resources, and some developed psychological trauma and post-traumatic stress disorder as a result of exposure to videos depicting graphic violence, animal abuse, and child sexual abuse. Moderators had been required to sign non-disclosure agreements with Cognizant to obtain the job, although three former workers broke the agreements to provide information to The Verge. Similarly, moderators at Meta's outsourced moderation companies in Kenya and Ghana reported mental illness, self-harm, attempted suicide, poor working conditions, low pay, and retaliation for advocating for better working conditions.

In response to negative coverage of its content moderation contracts, a Facebook director indicated that Facebook was developing a "global resiliency team" to assist its contractors. In late 2018, Facebook announced the creation of an oversight board, an internal "Supreme Court", to decide what content remains and what content is removed.

Twitter
The social media site Twitter has a suspension policy. Between August 2015 and December 2017, it suspended over 1.2 million accounts for terrorist content in order to reduce the number of followers and the amount of content associated with the Islamic State. Following the acquisition of Twitter by Elon Musk in October 2022, content rules were weakened across the platform in an attempt to prioritize free speech. However, the effects of this campaign have been called into question.
Distributed moderation
User moderation
User moderation allows any user to moderate any other user's contributions. Billions of people currently make decisions about what to share, forward, or give visibility to on a daily basis. On a large site with a sufficiently large active population, this usually works well, since relatively small numbers of troublemakers are screened out by the votes of the rest of the community.

User moderation can also be characterized by reactive moderation, which depends on the users of a platform or site to report content that is inappropriate or breaches community standards. In this process, when users encounter an image or video they deem unfit, they can click the report button; the complaint is then filed and queued for moderators to review.
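A minimal Python sketch of that reactive flow follows, assuming a simple report threshold; real platforms weigh many more signals, and the names and threshold value here are hypothetical.

```python
# A hypothetical sketch of reactive moderation: user reports accumulate
# until a post is queued for human review. Not any platform's real logic.
from collections import defaultdict

REPORT_THRESHOLD = 3                               # assumed: reports needed before review

report_counts: dict[str, int] = defaultdict(int)   # post_id -> number of reports
moderation_queue: list[str] = []                   # posts awaiting moderators


def report(post_id: str) -> None:
    """Called when a user clicks the report button on a post."""
    report_counts[post_id] += 1
    # Once enough users object, the complaint is queued for moderators.
    if report_counts[post_id] == REPORT_THRESHOLD:
        moderation_queue.append(post_id)


for _ in range(3):                                 # three users report the same post
    report("post-42")

print(moderation_queue)                            # ['post-42']
```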
Unionization
On 1 May 2023, 150 content moderators working on contracts for Meta, ByteDance, and OpenAI gathered in Nairobi, Kenya, to launch the first African Content Moderators Union. The union was launched four years after Daniel Motaung was fired and retaliated against for organizing a union at Sama, which contracts for Facebook.
Digital Services Act
The Digital Services Act (DSA) is an EU regulation that entered into force in 2022, establishing a comprehensive framework for digital services accountability, content moderation, and platform transparency across the European Union. Users can contest moderation decisions by online platforms that restrict their accounts or sanction their content in several ways; this right also applies to notices of illegal content that were rejected by the platform. Under the DSA, users may appeal through a platform's internal complaint-handling system, and platforms are required to promptly review their decisions. Under Article 21 of the DSA, users may also refer disputes to certified external bodies, which provide independent review of platform decisions.

The first EU-certified out-of-court settlement body for digital disputes under the DSA is ADROIT. It handles content moderation disputes arising from online shopping and booking platforms; content sharing, trading, marketplace, and marketing platforms; gaming, gambling, and betting platforms; and peer-to-peer services. ADROIT offers dispute resolution services in Dutch, English, French, German, Italian, Maltese, Portuguese, and Spanish. Another out-of-court dispute settlement body is the Appeals Centre Europe, which challenges decisions by social media platforms at no charge to users. The platforms it currently reviews are Facebook, Instagram, TikTok, Pinterest, Threads, and YouTube.
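The escalation path described above can be summarized as a small state machine. This Python sketch is an illustrative reading of the process, not the regulation's own terminology; the stage names are invented.

```python
# An illustrative state machine for the DSA appeal path described above;
# stage names are invented, not taken from the regulation's text.
from enum import Enum, auto


class Stage(Enum):
    PLATFORM_DECISION = auto()     # platform restricts an account or content
    INTERNAL_COMPLAINT = auto()    # appeal via the platform's own system
    OUT_OF_COURT_BODY = auto()     # Article 21: certified independent review
    RESOLVED = auto()


def escalate(stage: Stage, user_satisfied: bool) -> Stage:
    """Advance one step; an unsatisfied user may escalate further."""
    if user_satisfied:
        return Stage.RESOLVED
    next_stage = {
        Stage.PLATFORM_DECISION: Stage.INTERNAL_COMPLAINT,
        Stage.INTERNAL_COMPLAINT: Stage.OUT_OF_COURT_BODY,
        Stage.OUT_OF_COURT_BODY: Stage.RESOLVED,   # courts remain available
    }
    return next_stage.get(stage, Stage.RESOLVED)
```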