Menczer's research focuses on Web science, social networks, social media, social computation, Web mining, data science, distributed and intelligent Web applications, and modeling of complex information networks. He introduced the idea of
topical and adaptive Web crawlers, a specialized and intelligent type of
Web crawler. Menczer is also known for his work on
social phishing, a type of phishing attack that leverages friendship information from social networks and achieved a success rate of over 70% in experiments (with
Markus Jakobsson);
semantic similarity measures for information and social networks; models of complex information and
social networks (with
Alessandro Vespignani and others);
search engine censorship; and
search engine bias. The group led by Menczer has analyzed and modeled how
memes, information, and misinformation spread through
social media in domains such as the
Occupy movement, the Gezi Park protests, and political elections. Data and tools from Menczer's lab have aided in finding the roots of the
Pizzagate conspiracy theory and the disinformation campaign targeting the
White Helmets, and in taking down voter-suppression bots on Twitter. Menczer and coauthors have also found a link between online
COVID-19 misinformation,
vaccination hesitancy, and deaths. Analysis by Menczer's team demonstrated the
echo-chamber structure of information-diffusion networks on
Twitter during the
2010 United States elections. The team found that conservatives almost exclusively retweeted other conservatives, while liberals retweeted other liberals. Ten years later, this work received the Test of Time Award at the 15th International AAAI Conference on Web and Social Media (ICWSM). As these patterns of polarization and segregation persist, Menczer's team developed a model showing how social influence and unfollowing accelerate the emergence of online echo chambers.

Menczer and colleagues have advanced the understanding of information virality, showing in particular that the structure of early diffusion networks can predict which memes will go viral, and that competition for finite attention helps explain virality patterns. In a 2018 paper in
Nature Human Behaviour, Menczer and coauthors used a model to show that when agents in a social network share information under conditions of high information load and/or low attention, the correlation between the quality and the popularity of information in the system decreases. An erroneous analysis in the paper suggested that this effect alone was sufficient to explain why fake news is as likely to go viral as legitimate news on Facebook; when the authors discovered the error, they retracted the paper.

Following influential publications on the detection of
astroturfing and
social bots, Menczer and his team have studied the complex interplay between cognitive, social, and algorithmic factors that make social media platforms and their users vulnerable to manipulation, and have focused on developing tools to counter such abuse. Their bot-detection tool, Botometer, has been used to assess the prevalence of social bots and their sharing activity. Their tool for visualizing the spread of low-credibility content, Hoaxy, was used in conjunction with Botometer to reveal the key role played by social bots in spreading low-credibility content during the
2016 United States presidential election. Menczer's team also studied perceptions of partisan political bots, finding that Republican users are more likely to confuse conservative bots with humans, whereas Democratic users are more likely to confuse conservative human users with bots. Using bot probes on Twitter, Menczer and coauthors demonstrated a conservative political bias on the platform.

As social media platforms have increased their countermeasures against malicious automated accounts, Menczer and coauthors have shown that coordinated campaigns by inauthentic accounts continue to threaten information integrity, and have developed a framework to detect these coordinated networks. They also demonstrated new forms of social media manipulation by which bad actors can grow influence networks and hide the high volumes of content with which they flood the network. Through modeling, Menczer demonstrated that bad actors can most effectively control the spread of malicious content through a target online community by infiltrating it with inauthentic social media accounts. Menczer and colleagues realized that AI could be weaponized for this purpose, by creating fake but credible content at scale and managing hard-to-detect social bots, which they discovered on Twitter/X. These findings led to a warning about malicious swarms of AI agents that can threaten democracy.

Studying social media ranking/recommendation algorithms, Menczer and colleagues have shown that political audience diversity can be used as an indicator of news source reliability. After a high-profile
Science paper suggesting that Facebook's feed algorithm decreases exposure to misinformation compared to the chronological feed, Menczer co-wrote a letter in Science arguing that this conclusion was unsupported: the original study was carried out around the 2020 U.S. elections, during a period in which Facebook had instituted a series of emergency changes to its algorithm to reduce the spread of misinformation, so the reported decrease could have been caused by these temporary changes.

==Textbook==