Facebook's importance and scale have led to criticism in many domains. Issues include
Internet privacy, excessive retention of user information, its
facial recognition software
DeepFace, its addictive quality, and its role in the workplace, including employer access to employee accounts. Facebook has been criticized for electricity usage, tax avoidance, real-name user requirement policies, censorship, and its involvement in the United States
PRISM surveillance program. According to
The Express Tribune, Facebook "avoided billions of dollars in tax using offshore companies". Facebook is alleged to have harmful psychological effects on its users, including feelings of jealousy and stress, a lack of attention and
social media addiction. According to Kaufmann et al., mothers' motivations for using social media are often related to their social and mental health. European antitrust regulator
Margrethe Vestager stated that Facebook's terms of service relating to private data were "unbalanced". Facebook has been criticized for allowing users to publish illegal or offensive material. Specifics include
copyright and
intellectual property infringement,
hate speech, incitement of rape and terrorism,
fake news, and the livestreaming of crimes, murders, and other violent incidents. Commentators have accused Facebook of willingly facilitating the spread of such content.
Sri Lanka blocked both Facebook and WhatsApp in May 2019 after
anti-Muslim riots, the worst in the country since the
Easter Sunday bombings earlier that year, as a temporary measure to maintain peace. Facebook removed 3 billion fake accounts during the last quarter of 2018 and the first quarter of 2019 alone; in comparison, the social network reports 2.39 billion monthly active users. The consumer advocacy group
Which? claimed individuals were still utilizing Facebook to set up fraudulent five-star ratings for products. The group identified 14 communities that exchange reviews for either money or complimentary items such as watches, earbuds, and sprinklers. Facebook and its parent Meta have been subject to regulatory actions and fines over past privacy failures, including a substantial fine by the Irish Data Protection Commission for a severe 2018 breach affecting millions of accounts. Earlier, Facebook's role in the Facebook–Cambridge Analytica data scandal exposed the personal data of tens of millions of users without proper consent, leading to global criticism and legal actions. Moreover, investigative reports have highlighted ongoing issues with fraudulent and scam advertisements on Facebook platforms, revealing that the company earned significant revenue from such content even as enforcement efforts have lagged, drawing regulatory pressure worldwide.
Privacy concerns Facebook has experienced a steady stream of controversies over how it handles user privacy, repeatedly adjusting its privacy settings and policies. Since 2009, Facebook has participated in the secret PRISM program, sharing with the US
National Security Agency audio, video, photographs, e-mails, documents, and connection logs from user profiles, as have other social media services. On November 29, 2011, Facebook settled
Federal Trade Commission charges that it deceived consumers by failing to keep privacy promises. In August 2013
High-Tech Bridge published a study showing that links sent through Facebook's messaging service were being accessed by Facebook. In January 2014, two users filed a lawsuit against Facebook alleging that this practice violated their privacy. On June 7, 2018, Facebook announced that a bug had resulted in about 14 million Facebook users having their default sharing setting for all new posts set to "public". Its data-sharing agreements with Chinese companies such as
Huawei came under the scrutiny of US lawmakers, although the information accessed was not stored on Huawei servers and remained on users' phones. On April 4, 2019, half a billion records of Facebook users were found exposed on
Amazon cloud servers, containing information about users' friends, likes, groups, and checked-in locations, as well as names, passwords and email addresses. The phone numbers of at least 200 million Facebook users were found to be exposed on an open online database in September 2019. They included 133 million US users, 18 million from the UK, and 50 million from users in
Vietnam. After removing duplicates, the 419 million records were reduced to 219 million. The database went offline after TechCrunch contacted the web host. It is thought the records were amassed using a tool that Facebook disabled in April 2018 after the
Cambridge Analytica controversy. A Facebook spokeswoman said in a statement: "The dataset is old and appears to have information obtained before we made changes last year...There is no evidence that Facebook accounts were compromised." Facebook's privacy problems resulted in companies like
Viber Media and
Mozilla discontinuing advertising on Facebook's platforms. A January 2024 study by
Consumer Reports found that among a self-selected group of volunteer participants, each user is monitored or tracked by over two thousand companies on average.
LiveRamp, a San Francisco-based data broker, was responsible for 96 percent of the data. Other companies such as
Home Depot,
Macy's, and
Walmart are involved as well. In March 2024, a court in California released documents detailing Facebook's 2016 "Project Ghostbusters". The project was aimed at helping Facebook compete with
Snapchat and involved Facebook trying to develop decryption tools to collect, decrypt, and analyze traffic that users generated when visiting Snapchat and, eventually, YouTube and Amazon. The company eventually used its tool
Onavo to initiate man-in-the-middle attacks and read users' traffic before it was encrypted.
Racial bias Facebook was accused of committing "systemic" racial bias by the
Equal Employment Opportunity Commission based on the complaints of three rejected candidates and a current employee of the company. The three rejected candidates, along with an operations manager at Facebook as of March 2021, accused the firm of discriminating against Black people. The EEOC initiated an investigation into the case in March 2021.
Shadow profiles A "
shadow profile" refers to the data Facebook collects about individuals without their explicit permission. For example, the
"like" button that appears on third-party websites allows the company to collect information about an individual's internet browsing habits, even if the individual is not a Facebook user. Data can also be collected through other users. For example, a Facebook user can link their email account to their Facebook account to find friends on the site, allowing the company to collect the email addresses of users and non-users alike. Over time, countless data points about an individual are collected; while no single data point may identify the individual, together they allow the company to form a unique "profile". This practice has been criticized by those who believe people should be able to opt out of involuntary data collection. Additionally, while Facebook users can download and inspect the data they provide to the site, data from the user's "shadow profile" is not included, and non-users of Facebook have no access to this tool at all. The company has also been unclear about whether it is possible for a person to revoke Facebook's access to their "shadow profile".
Cambridge Analytica In 2018, it emerged that the political consulting firm Cambridge Analytica had harvested Facebook user data through a third-party personality-quiz app. While approximately 270,000 people used the app, Facebook's
API permitted data collection from their friends without their knowledge. At first Facebook downplayed the significance of the breach, and suggested that Cambridge Analytica no longer had access. Facebook then issued a statement expressing alarm and suspended Cambridge Analytica. Review of documents and interviews with former Facebook employees suggested that Cambridge Analytica still possessed the data. This was a violation of Facebook's
consent decree with the
Federal Trade Commission. This violation potentially carried a penalty of $40,000 per occurrence, totaling trillions of dollars. According to
The Guardian, both Facebook and Cambridge Analytica threatened to sue the newspaper if it published the story. After publication, Facebook claimed that it had been "lied to". On March 23, 2018, the
English High Court granted an application by the
Information Commissioner's Office for a warrant to search Cambridge Analytica's London offices, ending a standoff between Facebook and the Information Commissioner over responsibility. On March 25, Facebook published a statement by Zuckerberg in major UK and US newspapers apologizing over a "breach of trust". On March 26, the
Federal Trade Commission opened an investigation into the matter. The controversy led Facebook to end its partnerships with data brokers who aid advertisers in targeting users. On July 24, 2019, the FTC fined Facebook $5 billion, the largest penalty ever imposed on a company for violating consumer privacy. Additionally, Facebook had to implement a new privacy structure, follow a 20-year settlement order, and allow the FTC to monitor Facebook. Cambridge Analytica's CEO and a developer faced restrictions on future business dealings and were ordered to destroy any personal information they collected. Cambridge Analytica filed for bankruptcy. Facebook also implemented additional privacy controls and settings in part to comply with the European Union's
General Data Protection Regulation (GDPR), which took effect in May 2018. Facebook also ended its active opposition to the
California Consumer Privacy Act. Some, such as
Meghan McCain, have drawn an equivalence between the use of data by Cambridge Analytica and the
Barack Obama's 2012 campaign, which, according to
Investor's Business Daily, "encouraged supporters to download an Obama 2012 Facebook app that, when activated, let the campaign collect Facebook data both on users and their friends." Carol Davidsen, the former director of integration and media analytics for Obama for America (OFA), wrote that "Facebook was surprised we were able to suck out the whole
social graph, but they didn't stop us once they realised that was what we were doing".
DataSpii In July 2019, cybersecurity researcher Sam Jadali exposed a catastrophic data leak known as
DataSpii, involving data provider DDMR and marketing intelligence company Nacho Analytics (NA). Branding itself as the "God mode for the internet", NA, through DDMR, provided its members access to private Facebook photos and Facebook Messenger attachments, including tax returns. DataSpii harvested data from millions of Chrome and Firefox users through compromised browser extensions. The NA website stated it collected data from millions of opt-in users. Jadali, along with journalists from
Ars Technica and
The Washington Post, interviewed impacted users, including a
Washington Post staff member. According to the interviews, the impacted users did not consent to such collection. DataSpii demonstrated how a compromised user exposed the data of others, including the private photos and Messenger attachments belonging to a Facebook user's network of friends.
Breaches On September 28, 2018, Facebook experienced a major security breach exposing the data of 50 million users. The breach started in July 2017 and was discovered on September 16. Facebook notified users affected by the exploit and logged them out of their accounts. In March 2019, Facebook confirmed that a password compromise affecting millions of Facebook Lite users also affected millions of Instagram users. The cause was the storage of passwords as plain text rather than in encrypted form, leaving them readable by Facebook employees. On December 19, 2019, security researcher Bob Diachenko discovered a database containing more than 267 million Facebook user IDs, phone numbers, and names that had been left exposed on the web for anyone to access without a password or any other authentication. In February 2020, Facebook encountered a major
security breach in which its official
Twitter account was hacked by a
Saudi Arabia-based group called "
OurMine". The group has a history of actively exposing high-profile social media profiles' vulnerabilities. In April 2021,
The Guardian reported that the data of approximately half a billion users had been stolen, including birthdates and phone numbers. Facebook claimed it was "old data" from a problem fixed in August 2019, even though the data was not released until 2021, a year and a half later; it declined to speak with journalists, had apparently not notified regulators, called the problem "unfixable", and said it would not be advising users. In September 2024, Meta paid a $101 million fine for storing up to 600 million Facebook and Instagram user passwords in plain text. The practice was first discovered in 2019, though reports indicate passwords had been stored in plain text since 2012.
Phone data and activity After acquiring
Onavo in 2013, Facebook used its Onavo Protect
virtual private network (VPN) app to collect information on users'
web traffic and app usage. This allowed Facebook to monitor its competitors' performance, and motivated Facebook to acquire WhatsApp in 2014. Media outlets classified Onavo Protect as
spyware. In August 2018, Facebook removed the app in response to pressure from Apple, who asserted that it violated their guidelines. The
Australian Competition and Consumer Commission sued Facebook on December 16, 2020, for "false, misleading or deceptive conduct" in response to the company's unauthorized use of personal data obtained from Onavo for business purposes, in contrast to Onavo's privacy-oriented marketing. In 2016, Facebook Research launched Project Atlas, offering some users between the ages of 13 and 35 up to $20 per month in exchange for their personal data, including their app usage,
web browsing history,
web search history,
location history,
personal messages, photos, videos,
emails and
Amazon order history. In January 2019,
TechCrunch reported on the project. This led Apple to temporarily revoke Facebook's Enterprise Developer Program
certificates for one day, preventing Facebook Research from operating on iOS devices and disabling Facebook's internal iOS apps.
Ars Technica reported in April 2018 that the Facebook Android app had been harvesting user data, including phone calls and text messages, since 2015. In May 2018, several Android users filed a
class action lawsuit against Facebook for invading their privacy. In January 2020, Facebook launched the Off-Facebook Activity page, which allows users to see information collected by Facebook about their non-Facebook activities.
The Washington Post columnist Geoffrey A. Fowler found that this included which other apps he used on his phone, even while the Facebook app was closed, which other websites he visited on his phone, and what in-store purchases he made from affiliated businesses, even while his phone was completely off. In November 2021, Fairplay, Global Action Plan, and Reset Australia published a report detailing accusations that Facebook was continuing to run its ad targeting system on data collected from teen users. The accusations followed Facebook's July 2021 announcement that it would stop targeting ads at children.
Public apologies The company first apologized for its privacy abuses in 2009. Facebook apologies have appeared in newspapers, on television, in blog posts, and on Facebook itself. In May 2010, Zuckerberg apologized for discrepancies in privacy settings. Previously, Facebook's privacy settings were spread across more than 20 pages; it subsequently consolidated them onto a single page, which makes it more difficult for third-party apps to access users' personal information. A 2010 privacy research project noted that little information is available about the consequences of what people disclose online, so what is available often comes only from reports in the popular media. In 2017, a former Facebook executive went on the record to discuss how social media platforms have contributed to the unraveling of the "fabric of society".
Content disputes and moderation Facebook relies on its users to generate the content that bonds its users to the service. The company has come under criticism both for allowing objectionable content, including conspiracy theories and fringe discourse, and for prohibiting other content that it deems inappropriate.
Misinformation and fake news Facebook has been criticized as a vector for
fake news, and has been accused of bearing responsibility for the conspiracy theory that the United States created
ISIS, false anti-
Rohingya posts being used by
Myanmar's military to fuel
genocide and
ethnic cleansing, enabling
climate change denial and
Sandy Hook Elementary School shooting conspiracy theorists, and anti-refugee attacks in Germany. The government of the
Philippines has also used Facebook as a tool to attack its critics. In 2017, Facebook partnered with fact checkers from the
Poynter Institute's international fact-checking network to identify and mark false content, though most ads from political candidates are exempt from this program. As of 2018, Facebook had over 40 fact-checking partners across the world, including
The Weekly Standard. Critics of the program have accused Facebook of not doing enough to remove false information from its website. Facebook has repeatedly amended its content policies. In July 2018, it stated that it would "downrank" articles that its
fact-checkers determined to be false, and remove misinformation that incited violence. Facebook stated that content that receives "false" ratings from its fact-checkers can be demonetized and suffer dramatically reduced distribution. Specific posts and videos that violate community standards can be removed on Facebook. In May 2019, Facebook banned a number of "dangerous" commentators from its platform, including
Alex Jones,
Louis Farrakhan,
Milo Yiannopoulos,
Paul Joseph Watson,
Paul Nehlen,
David Duke, and
Laura Loomer, for allegedly engaging in "violence and hate". In May 2020, Facebook agreed to a preliminary settlement of $52 million to compensate U.S.-based Facebook content moderators for psychological trauma suffered on the job. Other legal actions around the world, including in Ireland, await settlement. In September 2020, the
Government of Thailand utilized the Computer Crime Act for the first time to take action against Facebook and
Twitter for ignoring requests to take down content and not complying with court orders. According to a report by
Reuters, beginning in 2020, the United States military ran a
propaganda campaign to spread disinformation about the
Sinovac Chinese
COVID-19 vaccine, including using fake social media accounts to spread the disinformation that the Sinovac vaccine contained pork-derived ingredients and was therefore
haram under
Islamic law. The campaign was described as "payback" for
COVID-19 disinformation by China directed against the U.S. In summer 2020, Facebook asked the military to remove the accounts, stating that they violated Facebook's policies on fake accounts and on COVID-19 information.
Steven Brill, co-founder of news reliability rating company
NewsGuard, criticized this decision, and described Facebook's prior fact-checking efforts as having failed to prevent misinformation.
Threats and incitement Professor
Ilya Somin reported that he had been the subject of death threats on Facebook in April 2018 from
Cesar Sayoc, who threatened to kill Somin and his family and "feed the bodies to Florida alligators". Somin's Facebook friends reported the comments to Facebook, which did nothing except dispatch automated messages. Sayoc was later arrested for the
October 2018 United States mail bombing attempts directed at Democratic politicians.
Terrorism Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019) was a case alleging that Facebook profited from content recommendations promoting Hamas. In 2019, the
U.S. Court of Appeals for the Second Circuit held that
Section 230 bars civil terrorism claims against
social media companies and internet service providers, the first federal appellate court to do so.
Hate speech In October 2020,
Pakistani Prime Minister
Imran Khan urged
Mark Zuckerberg, through a letter posted on the government's
Twitter account, to ban
Islamophobic content on Facebook, warning that it encouraged
extremism and violence. In October 2020, the company announced that it would ban
Holocaust denial. In October 2022,
Media Matters for America published a report that Facebook and Instagram were still profiting off advertisements using the slur "
groomer" for
LGBT people. The article reported that Meta had previously confirmed that the use of this word for the LGBT community violates its hate speech policies.
Violent erotica Ads on Facebook and Instagram have contained sexually explicit content, descriptions of graphic violence, and content promoting acts of self-harm. Many of these ads are for web-novel apps backed by the tech giants
Bytedance and
Tencent.
InfoWars Facebook was criticized for allowing
InfoWars to publish falsehoods and conspiracy theories. Facebook defended its actions in regard to
InfoWars, saying "we just don't think banning Pages for sharing conspiracy theories or false news is the right way to go." In early August 2018, Facebook banned the four most active
InfoWars-related pages for
hate speech.
Political manipulation As a dominant social-web service with massive reach, Facebook has been used by identified and unidentified political operatives to influence public opinion. Some of these activities violate platform policies, constituting "coordinated inauthentic behavior", whether in support of or in opposition to a cause. These activities can be scripted or
paid. Various such abusive campaigns have been revealed in recent years, the best known being the
Russian interference in the 2016 United States elections. In 2021, Sophie Zhang, a former Facebook analyst on the
Spam and
Fake Engagement teams, reported more than 25 political subversion operations and criticized Facebook's slow reaction times and lax, laissez-faire attitude toward oversight.
Influence operations and coordinated inauthentic behavior In 2018, Facebook said it had identified "coordinated inauthentic behavior" in "many Pages, Groups and accounts created to stir up political debate, including in the US, the Middle East, Russia and the UK." Campaigns operated by the British intelligence unit called the
Joint Threat Research Intelligence Group have broadly fallen into two categories: cyber attacks and propaganda efforts. The propaganda efforts use "mass messaging" and the "pushing [of] stories" via social media sites like Facebook. Israel's
Jewish Internet Defense Force, the
Chinese Communist Party's
50 Cent Party and Turkey's
AK Trolls also focus their attention on social media platforms like Facebook. In July 2018, Samantha Bradshaw, co-author of the report from the
Oxford Internet Institute (OII) at
Oxford University, said that "The number of countries where formally organised
social media manipulation occurs has greatly increased, from 28 to 48 countries globally. The majority of growth comes from political parties who spread
disinformation and junk news around election periods." In October 2018,
The Daily Telegraph reported that Facebook "banned hundreds of pages and accounts that it says were fraudulently flooding its site with partisan political content – although they came from the United States instead of being associated with
Russia." In December 2018,
The Washington Post reported that "Facebook has suspended the account of Jonathon Morgan, the chief executive of a top social media research firm"
New Knowledge, "after reports that he and others engaged in an operation to spread disinformation" on Facebook and Twitter during the
2017 United States Senate special election in Alabama. In January 2019, Facebook said it had removed 783 Iran-linked accounts, pages and groups for engaging in what it called "coordinated inauthentic behaviour". In March 2019, Facebook sued four Chinese firms for selling "fake accounts, likes and followers" to amplify Chinese
state media outlets. In May 2019,
Tel Aviv-based private intelligence agency
Archimedes Group was banned from Facebook for "coordinated inauthentic behavior" after Facebook found fake users in countries in sub-Saharan Africa, Latin America and Southeast Asia. Facebook investigations revealed that Archimedes had spent some $1.1 million on fake ads, paid for in Brazilian reais, Israeli shekels and US dollars. Facebook gave examples of Archimedes Group political interference in Nigeria, Senegal, Togo, Angola, Niger and Tunisia. The Atlantic Council's Digital Forensic Research Lab said in a report that "The tactics employed by Archimedes Group, a private company, closely resemble the types of information warfare tactics often used by governments, and the Kremlin in particular." On May 23, 2019, Facebook released its Community Standards Enforcement Report, highlighting that it had identified fake accounts through artificial intelligence and human monitoring. Over a six-month period, October 2018 – March 2019, the social network removed a total of 3.39 billion fake accounts, more than the roughly 2.4 billion real people on the platform. In July 2019, Facebook advanced its measures to counter deceptive political propaganda and other abuse of its services. The company removed more than 1,800 accounts and pages being operated from Russia, Thailand, Ukraine and Honduras. After Russia's invasion of Ukraine in February 2022, Russia's internet regulator announced that it would block access to Facebook. On October 30, 2019, Facebook deleted several accounts of employees working at the Israeli
NSO Group, stating that the accounts were "deleted for not following our terms". The deletions came after WhatsApp sued the Israeli surveillance firm for targeting 1,400 devices with
spyware. In 2020, Facebook helped found
American Edge, an anti-regulation
lobbying group to fight antitrust probes. The group runs ads that "fail to mention what legislation concerns them, how those concerns could be fixed, or how the horrors they warn of could actually happen", and that do not clearly disclose they are funded by Facebook. In 2020, the government of Thailand forced Facebook to take down a Facebook group called Royalist Marketplace, with one million members, after allegedly illegal posts were shared there. The authorities also threatened Facebook with legal action. In response, Facebook said it planned to take legal action against the Thai government for suppressing freedom of expression and violating human rights. In 2020, during the
COVID-19 pandemic, Facebook found that troll farms from
North Macedonia and the Philippines pushed coronavirus disinformation. A publisher that used content from these farms was banned. In the run-up to the
2020 United States elections, Eastern European troll farms operated popular Facebook pages showing content related to
Christians and
Blacks in America. They included more than 15,000 pages combined and were viewed by 140 million US users per month. This was in part due to how Facebook's algorithm and policies allow unoriginal viral content to be copied and spread in ways that still drive up user engagement. As of September 2021, some of the most popular pages were still active on Facebook despite the company's efforts to take down such content. In February 2021, Facebook removed the main page of the Myanmar military, after two protesters were shot and killed during the
anti-coup protests. Facebook said that the page breached its guidelines prohibiting the incitement of violence. On February 25, Facebook announced a ban on all accounts of the Myanmar military, along with the "
Tatmadaw-linked commercial entities". Citing the "exceptionally severe human rights abuses and the clear risk of future military-initiated violence in Myanmar", the tech giant also implemented the move on its subsidiary,
Instagram. In March 2021,
The Wall Street Journal editorial board criticized Facebook's decision to fact-check its op-ed titled "We'll Have Herd Immunity by April", written by surgeon
Marty Makary, calling it "counter-opinion masquerading as
fact checking." Facebook guidelines allow users to call for the death of public figures; they also allow praise of mass killers and 'violent non-state actors' in some situations.
Russian interference In 2018, Special Counsel
Robert Mueller indicted 13 Russian nationals and three Russian organizations for "engaging in operations to interfere with U.S. political and electoral processes, including the 2016 presidential election." Mueller contacted Facebook after the company disclosed that it had sold more than $100,000 worth of ads to a company (
Internet Research Agency, owned by Russian billionaire and businessman
Yevgeniy Prigozhin) with links to the Russian intelligence community before the
2016 United States presidential election. In September 2017, Facebook's chief security officer
Alex Stamos wrote that the company "found approximately $100,000 in ad spending from June 2015 to May 2017 – associated with roughly 3,000 ads – that was connected to about 470 inauthentic accounts and Pages in violation of our policies. Our analysis suggests these accounts and Pages were affiliated with one another and likely operated out of Russia." By comparison, the Clinton and Trump campaigns together spent $81 million on Facebook ads. The company pledged full cooperation in
Mueller's investigation, and provided all information about the Russian advertisements. Members of the
House and
Senate Intelligence Committees have claimed that Facebook withheld information that could illuminate the Russian propaganda campaign. Russian operatives have used Facebook to polarize American public discourse, organizing both
Black Lives Matter rallies and anti-immigrant rallies on U.S. soil, as well as anti-Clinton rallies and rallies both for and against Donald Trump. Facebook ads have also been used to exploit divisions over black political activism and Muslims by simultaneously sending contrary messages to different users based on their political and demographic characteristics in order to sow discord. Zuckerberg has stated that he regrets having dismissed concerns over Russian interference in the 2016 U.S. presidential election. Russian-American billionaire
Yuri Milner, who befriended Zuckerberg between 2009 and 2011, had
Kremlin backing for his investments in Facebook and Twitter. In January 2019, Facebook removed 289 pages and 75 coordinated accounts linked to the Russian state-owned news agency
Sputnik, which had misrepresented themselves as independent news or general-interest pages. Facebook later identified and removed an additional 1,907 Russia-linked accounts found to be engaging in "coordinated inauthentic behaviour". In 2018, a UK
Department for Digital, Culture, Media and Sport (DCMS) select committee report criticized Facebook for its reluctance to investigate abuse of its platform by the Russian government and for downplaying the extent of the problem, referring to the company as "digital gangsters". "Democracy is at risk from the malicious and relentless targeting of citizens with disinformation and personalised 'dark adverts' from unidentifiable sources, delivered through the major social media platforms we use every day," wrote Damian Collins, the DCMS Committee chair.
=== Anti-Rohingya propaganda ===
In 2018, Facebook took down 536 Facebook pages, 17 Facebook groups, 175 Facebook accounts, and 16 Instagram accounts linked to the
Myanmar military. Collectively these were followed by over 10 million people.
The New York Times reported on the takedowns.
=== Anti-Muslim propaganda and Hindu nationalism in India ===
A 2019 book titled
The Real Face of Facebook in India, co-authored by the journalists
Paranjoy Guha Thakurta and Cyril Sam, alleged that Facebook helped enable and benefited from the rise of
Narendra Modi's
Hindu nationalist Bharatiya Janata Party (BJP) in
India. Ankhi Das, Facebook's policy director for India and South and Central Asia, apologized publicly in August 2020 for sharing a Facebook post which called Muslims in India a "degenerate community". She said she shared the post "to reflect my deep belief in celebrating feminism and civic participation". She is also reported to have prevented action by Facebook against anti-Muslim content. In 2020, Facebook executives overrode their employees' recommendations that the BJP politician
T. Raja Singh should be banned from the site for
hate speech and rhetoric that could lead to violence. Singh had said on Facebook that
Rohingya Muslim immigrants should be shot and had threatened to destroy
mosques. Current and former Facebook employees told
The Wall Street Journal that the decision was part of a pattern of favoritism by Facebook toward the BJP as it seeks more business in India. Facebook also took no action after BJP politicians made posts accusing Muslims of intentionally spreading
COVID-19, an employee said. In 2020, the
Delhi Assembly began investigating whether Facebook bore blame for the
2020 religious riots in the city, claiming it had found Facebook "prima facie guilty of a role in the violence". Following a summons by a Delhi Assembly Committee, Facebook India vice-president and managing director
Ajit Mohan moved the Supreme Court, which granted him relief and stayed the summons. The central government later backed the decision, submitting in court that Facebook could not be made accountable before any state assembly and that the committee formed was unconstitutional. Following a fresh notice from the Delhi Assembly panel in 2021 for failing to appear before it as a witness, Mohan challenged it, saying that the 'right to silence' is a virtue in the present 'noisy times' and that the legislature had no authority to examine him in a law and order case. In July 2021, the Supreme Court refused to quash the summons and asked Facebook to appear before the Delhi Assembly panel. On September 23, 2023, it was reported that in 2021 Facebook had delayed for about a year before removing a network of accounts run by India's
Chinar Corps, which spread disinformation that endangered Kashmiri journalists. The delay, and the previously unpublicized takedown, were attributed to fear that Facebook's local employees would be targeted by authorities and that its business prospects in the country would suffer.
=== Company governance ===
Early Facebook investor and former Zuckerberg mentor
Roger McNamee described Facebook as having "the most centralized decision-making structure I have ever encountered in a large company."
Nathan Schneider, a professor of media studies at the
University of Colorado Boulder argued in 2018 for transforming Facebook into a
platform cooperative owned and governed by the users. Facebook co-founder Chris Hughes stated in 2019 that CEO Mark Zuckerberg has too much power, that the company is now a
monopoly, and that, as a result, it should be split into multiple smaller companies. He called for the breakup of Facebook in an
op-ed in
The New York Times. Hughes said he was concerned that Zuckerberg had surrounded himself with a team that does not challenge him, and that it is therefore the U.S. government's job to hold him accountable and curb his "unchecked power". Hughes also said that "Mark's power is unprecedented and un-American." Several U.S. politicians agreed with Hughes. EU Commissioner for Competition Margrethe Vestager has stated that splitting Facebook should be done only as "a remedy of the very last resort", and that it would not solve Facebook's underlying problems.
=== Customer support ===
Facebook has been criticized for its lack of human
customer support. When users' personal and business accounts are breached, many are forced to go through
small claims court to regain access and
restitution.
=== Litigation ===
The company has been subject to repeated litigation. Its most prominent case addressed allegations that Zuckerberg broke an
oral contract with
Cameron Winklevoss,
Tyler Winklevoss, and
Divya Narendra to build the
then-named "HarvardConnection" social network in 2004. On March 6, 2018,
BlackBerry sued Facebook and its Instagram and WhatsApp subsidiaries, alleging they had copied key features of its messaging app. In October 2018, a Texan woman sued Facebook, claiming she had been recruited into the sex trade at the age of 15 by a man who "friended" her on the social media network. Facebook responded that it works both internally and externally to ban sex traffickers. In 2019, British solicitors representing a
bullied Syrian schoolboy sued Facebook over
false claims. They alleged that Facebook protected prominent figures from scrutiny instead of removing content that violated its rules, and that this special treatment was financially driven. The Federal Trade Commission and a coalition of New York and 47 other state and regional governments filed separate suits against Facebook on December 9, 2020, seeking antitrust action based on its acquisitions of Instagram and WhatsApp, among other companies, calling these practices anticompetitive. The suits also assert that in acquiring these products, Facebook weakened privacy protections for their users. Among other remedies, the suits seek to unwind the acquisitions. On January 6, 2022, France's data privacy regulator
CNIL fined Facebook 60 million euros for not allowing internet users an easy way to refuse
cookies; a fine was also issued to
Google. On December 22, 2022, the Quebec Court of Appeal approved a class-action lawsuit on behalf of Facebook users who claim they were discriminated against because the platform allows advertisers to target both job and housing advertisements based on various factors, including age, gender, and race. The lawsuit centers on the platform's practice of "micro-targeting" ads, claiming ads are shown only in the feeds of people who belong to certain targeted groups. Women, for example, would not see ads targeting men, while older men would not see an ad aimed at people between 18 and 45.

== Impact ==