Beginning on October 22, a consortium of news outlets published articles based on documents provided by Haugen's lawyers, collectively referred to as The Facebook Papers. These articles detailed a range of harms and alleged wrongdoing that the documents indicated Facebook was aware of.
=== 2020 U.S. elections and January 6 U.S. Capitol attack ===
The New York Times pointed to internal discussions in which employees raised concerns that Facebook was spreading content about the QAnon conspiracy theory more than a year before the 2020 United States elections. After the election, a data scientist noted internally that 10 percent of all U.S. views of political content were of posts alleging that the election was fraudulent.

Among the ten anonymous whistleblower complaints Whistleblower Aid filed with the SEC on Haugen's behalf, one alleged that Facebook misled its investors and the general public about its role in perpetuating misinformation related to the 2020 elections and the political extremism that led to the January 6 United States Capitol attack. In the weeks after the 2020 U.S. presidential election, Facebook rolled back many of the content policy enforcement measures it had put in place for the election, despite internal tracking data showing a rise in policy-violating content on the platform; meanwhile, Donald Trump's Facebook account remained whitelisted in the company's XCheck program. Another of the whistleblower complaints Haugen filed with the SEC alleged that the company misled investors and the general public about enforcement of its terms of service because of such whitelisting under the XCheck program.
=== Instagram's effects on teenagers ===
The Files show that Facebook (now Meta) had been conducting internal research since 2018 into how Instagram affects young users. The findings pointed to Instagram being harmful to a large portion of young users, with teenage girls among the most harmed: researchers within the company reported that "we make body issues worse for one in three teenage girls". Internal research also found negative social comparison among teenage boys, reported at 14% of boys in the U.S. in 2019. The research concluded that some problems, such as social comparison among teens, were specific to Instagram rather than to social media use in general. Facebook published some of this internal research on September 29, 2021, saying that news reports had mischaracterized the purpose and results of its research.
=== Studying of preteens ===
The Files show that Facebook formed a team to study preteens, set a three-year goal to create more products for this demographic, and commissioned strategy papers on the long-term business prospects of attracting preteen users. Facebook's research included studies of tweens' use of social media apps and of parents' responses. Federal privacy laws, including the Children's Online Privacy Protection Act (COPPA), restrict data collection on children under 13 years old. Internal documents from April 2021 showed plans to build apps targeting children from ages 6 to 17; by September, the head of Instagram had announced that development of those apps was being halted. A 2020 Facebook document asks, "Why do we care about tweens?" and answers: "They are a valuable but untapped audience."
=== Violence in developing countries ===
An internal memo seen by The Washington Post revealed that Facebook had been aware of hate speech and calls for violence on its platform in India against groups such as Muslims and Kashmiris, including posts of photos of piles of dead Kashmiris with glorifying captions; even so, none of the accounts publishing them were blocked. Documents show that Facebook responded to these incidents by removing posts that violated its policies, but made no substantial effort to prevent repeat offenses.
The Washington Post reported that for three years, Facebook's algorithms promoted posts receiving the newer emoji reactions (including the 'angry' reaction), giving each such reaction a score five times that of a traditional like. Facebook's own researchers later found that posts drawing 'angry' reactions were much more likely to be toxic, polarizing, fake, or low quality. Despite frequent internal calls to act, the company did not treat the 'angry' reaction differently from other reactions until September 2019, when its weight was cut to zero, and only after the company recognized users' dissatisfaction at seeing their posts receive angry reactions.

There have been other cases in which Facebook prioritized new features it wanted to promote even when they turned out to amplify toxic or radicalizing material. In 2018, Facebook overhauled its News Feed ranking to favor "Meaningful Social Interactions", or MSI. The new algorithm increased the weight of reshared material, a move intended to "reverse the decline in comments and encourage more original posting". While the change succeeded on those terms, users reported that feed quality declined, and anger on the site increased. Leaked documents show that employees proposed several changes to fix some of these issues, but the documents claim Mark Zuckerberg rejected the proposals out of concern that they might reduce user engagement with Facebook.

The documents also point to a 2019 Facebook study in which researchers created a fake account based in India and tracked what content it was shown and interacted with. Within three weeks, according to an internal company report, the fake account's news feed was being shown pornography and had become "filled with polarizing and graphic content, hate speech and misinformation".
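The ranking change described above amounts to adjusting per-reaction weights in an engagement score. The sketch below illustrates the arithmetic using the figures from the reporting (emoji reactions at five times a like, with the 'angry' weight later zeroed); the function and data structures are hypothetical, not Facebook's actual code.

<syntaxhighlight lang="python">
# Illustrative only: a minimal engagement score using the reaction weights
# described in the reporting. Names and structure are hypothetical.

EARLY_WEIGHTS = {"like": 1, "love": 5, "haha": 5, "wow": 5, "sad": 5, "angry": 5}
LATER_WEIGHTS = dict(EARLY_WEIGHTS, angry=0)  # 'angry' weight cut to zero

def engagement_score(reactions: dict[str, int], weights: dict[str, int]) -> int:
    """Weighted sum of reaction counts; higher scores rank higher in the feed."""
    return sum(weights.get(kind, 0) * count for kind, count in reactions.items())

post_reactions = {"like": 100, "angry": 40}
print(engagement_score(post_reactions, EARLY_WEIGHTS))  # 100*1 + 40*5 = 300
print(engagement_score(post_reactions, LATER_WEIGHTS))  # 100*1 + 40*0 = 100
</syntaxhighlight>

Under the early weights, a post that provokes anger can outscore one that simply collects likes, which is the amplification effect the researchers flagged.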
=== Employee dissatisfaction ===
Politico quotes several Facebook staff expressing concerns about the company's willingness and ability to respond to damage caused by the platform. A 2020 post reads: "It's not normal for a large number of people in the 'make the site safe' team to leave saying, 'hey, we're actively making the world worse FYI.' Every time this gets raised it gets shrugged off with 'hey people change jobs all the time' but this is NOT normal." Another post, from 2019, reads: "We do have reasonable metrics that can tell us when a given ranking change is likely to be causing integrity harms — even with low precision and recall, we can get a decent sense of whether a launch is increasing hate speech, or misinformation, or other harms. However, we don't have a way of effectively demoting this content in a targeted way... and even if we did, we often won't be able to launch them based on policy concerns."
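The precision and recall the 2019 post refers to are standard measures of classifier quality. The sketch below, with invented counts purely for illustration, shows how they are computed for a hypothetical hate-speech classifier.

<syntaxhighlight lang="python">
# Hypothetical illustration of the precision/recall metrics the employee
# post refers to; the classifier and counts below are invented.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """precision: share of flagged posts that truly violate policy;
    recall: share of truly violating posts that get flagged."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# A classifier flags 1,000 posts as hate speech; 300 of them actually are,
# and it misses another 700 real violations elsewhere:
p, r = precision_recall(tp=300, fp=700, fn=700)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.30, recall=0.30
</syntaxhighlight>

Even at such low values, comparing flagged volumes before and after a ranking change can still indicate whether the change is increasing harms, which is the point the post makes.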
=== Apple's threat to remove Facebook and Instagram ===
In 2019, following concerns that Facebook and Instagram were being used to trade domestic workers ("maids") in the Middle East, Apple threatened to remove both iOS apps from the App Store. Facebook then promised to enforce stronger regulations, yet later acknowledged that it was under-enforcing them. Two years later, a search on Facebook for maids would still yield listings of workers, who have reported being starved, sold, locked inside homes, and physically assaulted.
=== XCheck ===
The documents revealed a private program known as "XCheck" or "cross-check" that Facebook employed to whitelist posts from users deemed "high-profile". The system began as a quality control measure but has since grown to protect "millions of VIP users from the company's normal enforcement process". XCheck has allowed celebrities and other public figures to escape penalties that an average Facebook user would receive for violating the same policies. In 2019, footballer Neymar posted nude photos of a woman who had accused him of rape; the photos were left up for more than a day. According to Facebook's internal documents cited by The Wall Street Journal, "XCheck grew to include at least 5.8 million users in 2020". The goal of XCheck was "to never publicly tangle with anyone who is influential enough to do you harm".
=== Collaboration on censorship with the government of Vietnam ===
In 2020, Vietnam's communist government threatened to shut Facebook down in the country if the company did not cooperate in censoring political content there; Vietnam is Facebook's (now Meta's) biggest market in Southeast Asia. The decision to comply was personally approved by Mark Zuckerberg. By the end of 2020, Facebook reported that the amount of content it censored in Vietnam had increased by 983% compared with the previous reporting period.
=== Suppression of political movements on its platform ===
In 2021, Facebook developed a new strategy for addressing harmful content on its site, implementing measures designed to reduce and suppress the spread of movements deemed hateful. According to a senior security official at Facebook, the company "would seek to disrupt on-platform movements only if there was compelling evidence that they were the product of tightly knit circles of users connected to real-world violence or other harm and committed to violating Facebook's rules". The measures included promoting the movements' posts less prominently in users' News Feeds and not notifying users of new posts from their pages. Groups identified as affected by Facebook's social-harm policy include the Patriot Party, previously linked to the Capitol attack, as well as Querdenken, a newer German conspiracy movement that had been placed under surveillance by German intelligence after protests it organized repeatedly "resulted in violence and injuries to the police".
=== Facebook's AI usage concern ===
According to The Wall Street Journal, documents show that in 2019 Facebook reduced the time human reviewers spent on hate-speech complaints, shifting toward a heavier reliance on its artificial intelligence systems to police such content. However, internal documents claim the AI has been largely unsuccessful, struggling to detect videos of car crashes and cockfighting and to understand hate speech in foreign languages. Engineers and researchers within Facebook have estimated that its AI has been able to detect and remove only 0.6% of "all content that violated Facebook's policies against violence and incitement".
=== Inclusion of Breitbart News as trusted news source ===
The Wall Street Journal reported that Facebook executives resisted removing the far-right website Breitbart News from Facebook's News Tab feature to avoid angering Donald Trump and Republican members of Congress, despite criticism from Facebook employees. An August 2019 internal Facebook study had found that Breitbart News was the least trusted news source, and was also ranked as low quality, among the sources it examined across the U.S. and Great Britain.

== The Wall Street Journal podcast ==