
Algospeak

In social media, algospeak is a self-censorship phenomenon in which users adopt coded expressions to evade real or imagined automated content moderation. It allows users to discuss topics deemed sensitive by moderation algorithms while avoiding penalties such as shadow banning, downranking, or de-monetization of content. A type of netspeak, algospeak primarily serves to bypass censorship, though it can also reinforce group belonging, especially in marginalized communities. Algospeak has been identified as one source of linguistic change in the modern era, with some terms spreading into everyday offline speech and writing. The term has also been used more broadly to include any language change driven by digital usage.

History
The term algospeak, a blend of algorithm and -speak, appears to date back to 2021, though related ideas have existed for longer. In 2018, the internet researcher Emily van der Nagel coined the terms Voldemorting and screenshotting, two strategies social media users employ to avoid giving attention to objectionable figures or attracting algorithmic attention from unwanted audiences. Voldemorting involves obfuscating the referent of a post by avoiding directly mentioning a name or a term; screenshotting refers to sharing screenshots instead of machine-readable text. The term algospeak gained wider recognition in 2022 after Taylor Lorenz featured it in an article for The Washington Post. In 2025, Adam Aleksic published Algospeak, the first monograph dedicated to the phenomenon; it proposes an expanded definition encompassing any language change that is primarily driven by the constraints of digital platforms.
Causes and motivations
Many social media platforms rely on automated content moderation systems to enforce their guidelines; users have no control over these systems, which may change at any time. Automated moderation can miss important context: for example, benign communities that support people struggling with self-harm, suicidal thoughts, or past sexual violence may receive unwarranted penalties. An interview study with nineteen TikTok creators found that they felt TikTok's moderation lacked contextual understanding, appeared random, was often inaccurate, and exhibited bias against marginalized communities. Euphemisms such as "cheese pizza" are used to refer to child pornography. On TikTok, moderation decisions can result in consequences such as account bans and the deletion or delisting of videos from the main video discovery page, called the "For You" page. In response, a TikTok spokeswoman told The New York Times that users' fears are misplaced, noting that many popular videos discuss sex-adjacent topics.
Methods
Algospeak uses techniques akin to those of Aesopian language to conceal the intended meaning from automated content filters while remaining understandable to human readers. Some methods involve intersemiotic translation, in which non-linguistic signs are interpreted linguistically, in addition to further obfuscation. For example, the corn emoji "🌽" may signify pornography by way of porn → corn → 🌽. In one interview study, most creators suspected that TikTok's automated moderation also scanned audio, leading them to use algospeak terms in speech as well. Some also label sensitive images with innocuous captions using algospeak, such as captioning a scantily dressed body as "fake body". A notable example is the use of the watermelon emoji on social media as a pro-Palestinian symbol in place of the Palestinian flag, in order to avoid censorship by Facebook and Instagram. Black creators may simply present their light-colored palms to the camera to stand in for white people, and flip them to stand in for black people.
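The substitution mechanism described above can be illustrated with a minimal sketch. This is not any platform's actual moderation system; the blocklist, the substitution table, and the function names are all hypothetical, built from examples mentioned in this article, and serve only to show how verbatim keyword matching is evaded.

```python
# Illustrative sketch only: a naive blocklist filter (a hypothetical
# stand-in for automated moderation) and how simple algospeak
# substitutions slip past it.

BLOCKLIST = {"sex", "porn", "suicide"}

# Hypothetical algospeak substitutions drawn from examples in this article.
ALGOSPEAK = {"sex": "seggs", "porn": "🌽", "suicide": "sewer slide"}

def is_flagged(text: str) -> bool:
    """Flag text if any blocklisted token appears verbatim."""
    return any(tok in BLOCKLIST for tok in text.lower().split())

def encode(text: str) -> str:
    """Replace sensitive words with their algospeak equivalents."""
    for word, coded in ALGOSPEAK.items():
        text = text.replace(word, coded)
    return text

original = "resources for suicide prevention"
coded = encode(original)     # "resources for sewer slide prevention"
print(is_flagged(original))  # True
print(is_flagged(coded))     # False
```

Note that the coded text is still transparent to a human reader, which is the defining property of algospeak: the filter matches surface forms, while people recover the meaning.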
Impact and detection
A 2022 poll showed that nearly a third of American social media users reported using "emojis or alternative phrases" to subvert content moderation. In an interview study, creators shared that the evolving nature of content moderation pressures them to constantly innovate their use of algospeak, which makes them feel less authentic. Another study showed that sentiment analysis models often rate negative comments incorporating simple letter–number substitution and extraneous hyphenation more positively.
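The detection failure described above can be sketched with a toy example. The tiny lexicon scorer below is a hypothetical stand-in for a real sentiment model, not the one used in the cited study; it only demonstrates why token-based approaches miss words altered by letter–number substitution or stray hyphens.

```python
# Illustrative only: letter–number substitution and extra hyphenation
# change the surface tokens, so a naive lexicon-based scorer (a
# hypothetical stand-in for a sentiment model) misses the negative word.

NEGATIVE_LEXICON = {"hate", "awful", "terrible"}

def naive_sentiment(text: str) -> int:
    """Score text by counting exact matches against a negative-word lexicon."""
    return -sum(tok in NEGATIVE_LEXICON for tok in text.lower().split())

plain = "i hate this"
obfuscated = "i h4te th-is"          # letter–number swap plus a stray hyphen
print(naive_sentiment(plain))        # -1 (negative word detected)
print(naive_sentiment(obfuscated))   # 0  (reads as neutral)
```

A human reader still parses "h4te" as "hate", which mirrors the asymmetry the study reports: obfuscation defeats the model without defeating the audience.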
Examples
According to The New York Times:
- accountant – sex worker
- cornucopia – homophobia
- le dollar bean – lesbian, as derived from the written form Le$bian
- leg booty – the LGBTQ+ community
- nip nops – nipples
- panini, panoramic – a pandemic, especially the COVID-19 pandemic
- seggs – sex

Other examples:
- acoustic, artistic, 'tism – autistic
- blink in lio – link in bio
- camping – abortion
- cheese pizza – child pornography
- fork – fuck
- grape – rape
- Yahtzee – Nazi
- juice – Jew
- pew pew – guns
- Keep Yourself Safe – whose initials stand for "kill yourself"
- music festival – protest
- mustache man, Austrian painter – Adolf Hitler
- opposite of love – hatred
- ouid – weed
- Panda Express – pandemic
- PDF file or PDF – pedophile
- pinwheel – swastika
- regarded, restarted – retarded
- sewer slide – suicide
- shmex – sex
- yt – white people, though yt is also a common abbreviation for YouTube