===Misinformation and harmful content===
False news stories spread faster and more broadly than accurate stories on Twitter (now X), according to a 2018 study, although the authors attributed this primarily to human sharing behaviour rather than platform algorithms. Concerns about the recommendation of borderline or conspiratorial material led YouTube to announce changes in January 2019 aimed at reducing recommendations of videos that approached but did not violate the platform's rules. A 2024 study using experimental "counterfactual bots" to isolate the causal role of YouTube's recommender found that, on average, the algorithm pushed users towards more moderate content rather than more extreme material. This moderating effect was strongest for heavy consumers of partisan content, and the authors concluded that individual user preferences played a larger role than algorithmic recommendations in determining consumption patterns.

A 2025 algorithmic audit of X found that the platform's engagement-based ranking algorithm amplified content that was emotionally charged and hostile towards members of opposing political groups, compared to a reverse-chronological baseline. Users did not prefer the political content selected by the engagement-based algorithm when asked to evaluate it directly, suggesting a gap between what drives engagement and what users report valuing.
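The amplification such audits measure is a selection effect, which a toy model can make concrete. The sketch below is illustrative only: the post pool, the 10 per cent hostility rate, and the engagement boost for hostile posts are assumptions chosen for demonstration, not figures from the study. It compares the share of outgroup-hostile posts in an engagement-ranked feed against a reverse-chronological baseline.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical post pool: each post has a timestamp, a predicted-engagement
# score, and a label marking hostility towards a political outgroup.
n_posts = 10_000
timestamps = rng.uniform(0.0, 1.0, n_posts)
hostile = rng.binomial(1, 0.10, n_posts)          # assumed: 10% of posts are hostile
engagement = rng.normal(0.0, 1.0, n_posts) + 1.5 * hostile  # assumed engagement boost

k = 50  # number of posts the feed surfaces
chrono_feed = np.argsort(timestamps)[-k:]  # reverse-chronological baseline: newest posts
ranked_feed = np.argsort(engagement)[-k:]  # engagement-based ranking: highest-scoring posts

print(f"hostile share, chronological baseline: {hostile[chrono_feed].mean():.2f}")
print(f"hostile share, engagement-ranked feed: {hostile[ranked_feed].mean():.2f}")
</syntaxhighlight>

Because ranking selects the extreme tail of the engagement distribution, even a modest average engagement advantage for hostile posts yields a feed dominated by them, while the chronological baseline reflects the underlying base rate; this tail-selection mechanism is what such audits are designed to detect.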
Human rights investigations have also linked algorithmic amplification to mass violence. Amnesty International argued that Facebook's news feed, groups, and recommendation features actively amplified anti-Rohingya hatred in Myanmar in the years preceding the 2017 atrocities, helping to intensify the circulation of divisive and inflammatory content. The 2019 Christchurch mosque attacks in New Zealand, in which the attacker live-streamed the shooting, drew attention to the role of recommendation systems in redistributing terrorist content. The Christchurch Call, a multilateral commitment adopted by governments and technology companies in May 2019, identified algorithmic amplification as a factor in the spread of terrorist and violent extremist content and committed signatories to review how recommendation algorithms direct users towards such material. In 2022, the Call launched the Christchurch Call Initiative on Algorithmic Outcomes (CCIAO) to develop privacy-preserving tools that let independent researchers audit recommendation systems for radicalisation pathways.

The legal question of whether platforms bear liability for algorithmically recommending terrorist content reached the Supreme Court of the United States in Gonzalez v. Google LLC (2023), in which the family of a victim of the November 2015 Paris attacks argued that YouTube's recommendation algorithm had directed users towards ISIS recruitment videos. The Court declined to rule on the Section 230 question, disposing of the case on other grounds. Whether recommendation algorithms actively drive users towards extremist content remains disputed: a 2021 peer-reviewed study found that while extremist and fringe content did appear in platform recommendations, policymakers had yet to grasp the difficulty of "de-amplifying" legal but borderline material, and the conceptual distinction between users' own choices and algorithmic effects was often blurred in both academic and policy discussion.

A 2026 BBC investigation based on testimony from more than a dozen whistleblowers and former employees at Meta and TikTok reported that competitive pressure between the two companies had led to safety trade-offs in content recommendation. A former senior Meta researcher shared internal research showing that comments on Instagram Reels, launched in 2020 as Meta's response to TikTok, had significantly higher rates of hostile speech than the main Instagram feed: 75 per cent higher for bullying and harassment, 19 per cent higher for hate speech, and 7 per cent higher for violence and incitement. A former Meta engineer said that senior management had directed his team to allow more borderline harmful content into users' feeds in order to compete with TikTok, linking the decision to the company's falling share price. Separate internal documents shared with the BBC described how Facebook's engagement-based algorithm rewarded negativity, and said that the company's algorithmic incentives were not aligned with its stated mission. Meta denied the claims, stating that it had strict policies to protect users and had invested significantly in safety over the preceding decade.
===Creator visibility and economic effects===
Algorithmic ranking shapes visibility around engagement and audience retention, and a small number of dominant platforms concentrate the distribution of online attention. Content producers who rely on platforms for distribution have become dependent on opaque and frequently changing ranking systems for visibility and revenue; news organisations have been particularly affected, because competition for algorithmically directed attention can favour material that attracts engagement over more resource-intensive reporting. The rapid adoption of generative artificial intelligence tools from 2023 onwards has lowered the cost of content production, increasing the volume of material available for recommendation systems to rank. Economic modelling has suggested that when such tools produce content of middling quality, the resulting increase in volume can congest recommendation systems and reduce the visibility of higher-quality material, harming both consumers and professional creators.
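The congestion mechanism can be illustrated with a minimal simulation; this is a sketch under assumed parameters (the pool sizes, quality levels, and noise scale are arbitrary), not the cited economic model. A recommender ranks items by a noisy estimate of quality and surfaces the top k; as the pool fills with middling items, some of them draw large positive noise and crowd genuinely better material out of the feed.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def feed_quality(n_high, n_mid, k=20, noise=1.0, trials=500):
    """Average true quality of a top-k feed ranked by noisy quality
    estimates, and the share of the feed drawn from the high-quality
    pool. All quantities are illustrative."""
    quality = np.concatenate([
        np.full(n_high, 2.0),   # fixed pool of high-quality items
        np.full(n_mid, 1.0),    # growing pool of middling items
    ])
    avg_quality, high_share = 0.0, 0.0
    for _ in range(trials):
        scores = quality + rng.normal(0.0, noise, quality.size)  # noisy ranking signal
        top = np.argsort(scores)[-k:]                            # items the feed surfaces
        avg_quality += quality[top].mean()
        high_share += (top < n_high).mean()   # high-quality items sit at indices < n_high
    return avg_quality / trials, high_share / trials

for n_mid in (100, 1_000, 10_000):
    q, share = feed_quality(n_high=50, n_mid=n_mid)
    print(f"middling items: {n_mid:6d}   avg feed quality: {q:.2f}   high-quality share: {share:.2f}")
</syntaxhighlight>

As the middling pool grows, the probability that middling items outrank better material under ranking noise rises, so both the feed's average quality and the visibility of high-quality producers fall, which is the congestion effect the modelling describes.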
===Political content and polarisation===
Research on whether algorithmic recommendation amplifies political content in a particular ideological direction has produced varying findings across platforms, methodologies, and time periods. Studies of X have identified directional effects, while experimental work on Facebook and YouTube has found more limited attitudinal effects. More recent experimental work has provided causal evidence that engagement-based ranking can shift political attitudes, with effects that, while modest at the individual level, may be significant when aggregated across millions of users over extended periods. However, a study that tracked real users' Google Search activity during the 2018 and 2020 US elections found that partisan identification had a small and inconsistent relationship with the news sources Google's algorithm showed users, but a larger and more consistent relationship with the sources users chose to click on and engage with. The authors concluded that user choice, rather than algorithmic curation, was the primary driver of exposure to partisan and unreliable news through search.
===Mental health and minors===
The effects of algorithmic recommendation on young users' mental health have become a subject of policy debate in multiple jurisdictions. A Wall Street Journal investigation found that TikTok's algorithm could narrow recommendations towards material related to self-harm, eating disorders, or drug use within hours of a user showing interest in adjacent content. A 2023 Amnesty International report reached similar conclusions about TikTok's For You feed, arguing that targeted recommendations could rapidly intensify exposure to depressive and self-harm-related material among vulnerable young users.

A TikTok trust and safety employee who spoke to the BBC in 2026 said that the company's internal case prioritisation system rated complaints from politicians as higher priority than reports of harm involving minors. In one example shown to the BBC, a political figure who had been mocked online was prioritised over a 16-year-old in Iraq who reported that sexualised images purporting to be of her were being shared on the platform. The employee said that the company prioritised political cases to maintain relationships with governments and avoid regulatory action, rather than because of the severity of the harm reported. TikTok rejected this characterisation, stating that specialist workflows for political content did not result in the deprioritisation of child safety cases, which were handled by dedicated teams within separate review structures.

These concerns have informed legislative activity. The Kids Online Safety Act (KOSA), introduced in the United States Senate in 2022 and reintroduced in subsequent sessions, would require platforms to allow minors to disable personalised algorithmic recommendations and would impose a duty of care regarding harms arising from platform design. The bill passed the Senate in July 2024 but did not complete passage through the House before the end of the 118th Congress; it was reintroduced in 2025. New York's Stop Addictive Feeds Exploitation (SAFE) For Kids Act, signed into law in 2024, requires platforms to default to chronological feeds for users under 18 unless parental consent is obtained. In the United Kingdom, the communications regulator Ofcom has published draft Children's Safety Codes of Practice under the Online Safety Act 2023, requiring services with recommender systems to filter harmful content out of children's feeds.
===State use and control===
Research has examined how state actors interact with platform visibility systems, both by producing content designed for algorithmic distribution and by deploying automated accounts to shape what is seen. The Chinese government has operated a large-scale decentralised propaganda network on Douyin (the Chinese version of TikTok), in which tens of thousands of regime-affiliated accounts produced and disseminated content through the platform's recommendation infrastructure. The decentralised model allowed state messaging to reach fragmented audiences more effectively than traditional top-down propaganda.

==Methods of study==