=== Messaging apps ===
Many companies' chatbots run on messaging apps or simply via SMS. They are used for B2C customer service, sales and marketing. In 2016, Facebook Messenger allowed developers to place chatbots on its platform. There were 30,000 bots created for Messenger in the first six months, rising to 100,000 by September 2017. Since September 2017, this has also been available as part of a pilot program on WhatsApp. Airlines KLM and Aeroméxico both announced their participation in the testing; both airlines had previously launched customer services on the Facebook Messenger platform. The bots usually appear as one of the user's contacts, but can sometimes act as participants in a group chat.

Many banks, insurers, media companies, e-commerce companies, airlines, hotel chains, retailers, health care providers, government entities, and restaurant chains have used chatbots to answer simple questions, increase customer engagement, promote their products and services, and offer additional ways to order from them. Chatbots are also used in market research to collect short survey responses. A 2017 study showed that 4% of companies used chatbots; in a 2016 study, 80% of businesses said they intended to have one by 2020.
=== As part of company apps and websites ===
Previous generations of chatbots were present on company websites, e.g. Ask Jenn from Alaska Airlines, which debuted in 2008, or Expedia's virtual customer service agent, which launched in 2011. The newer generation of chatbots includes IBM Watson-powered "Rocky", introduced in February 2017 by the New York City-based e-commerce company Rare Carat to provide information to prospective diamond buyers.
=== Chatbot sequences ===
Marketers use chatbot sequences to script series of messages, much like an autoresponder sequence. Such sequences can be triggered by user opt-in or by the use of keywords within user interactions. After a trigger occurs, a sequence of messages is delivered until the next anticipated user response. Each user response is used in the decision tree to help the chatbot navigate the response sequences and deliver the correct response message, as in the sketch below.
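The decision-tree mechanism described above can be illustrated with a minimal sketch. The names below (SequenceNode, run_sequence, and the example tree) are hypothetical and not tied to any particular marketing platform; the sketch assumes a simple keyword match against each user reply.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field

@dataclass
class SequenceNode:
    """One step in a chatbot sequence: messages to send, then branches keyed on reply keywords."""
    messages: list[str]
    branches: dict[str, "SequenceNode"] = field(default_factory=dict)

# Example tree: an opt-in keyword starts the sequence, and each reply selects the next branch.
TREE = SequenceNode(
    messages=["Thanks for subscribing! Are you shopping for yourself or for a gift?"],
    branches={
        "myself": SequenceNode(messages=["Great - here is our personal catalogue."]),
        "gift": SequenceNode(messages=["Lovely - here is our gift guide."]),
    },
)

def run_sequence(node: SequenceNode, get_reply, send) -> None:
    """Deliver each node's messages, wait for the user reply, and follow the matching branch."""
    while True:
        for message in node.messages:
            send(message)
        if not node.branches:
            return  # leaf node: the sequence is complete
        reply = get_reply().strip().lower()
        # Follow the branch whose keyword appears in the reply; re-ask if none matches.
        node = next(
            (child for keyword, child in node.branches.items() if keyword in reply),
            node,
        )

if __name__ == "__main__":
    run_sequence(TREE, get_reply=input, send=print)
</syntaxhighlight>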
=== Company internal platforms ===
Companies have used chatbots for customer support, human resources, or in Internet-of-Things (IoT) projects. Overstock.com, for one, has reportedly launched a chatbot named Mila to attempt to automate certain processes that arise when customer service employees request sick leave. Other large companies such as Lloyds Banking Group, Royal Bank of Scotland, Renault and Citroën now use chatbots instead of call centres with humans to provide a first point of contact. In large organizations, such as hospitals and aviation companies, chatbots are also used to share information internally and to assist or replace service desks.
=== Customer service ===
Chatbots have been proposed as a replacement for customer service departments. In 2026, The Financial Times reported on agentic chatbots that could do shopping for customers once given instructions. In 2016, Russia-based Tochka Bank launched a chatbot on Facebook for a range of financial services, including the possibility of making payments. In July 2016, Barclays Africa also launched a Facebook chatbot.
=== Healthcare ===
Chatbots are also appearing in the healthcare industry. A study suggested that physicians in the United States believed that chatbots would be most beneficial for scheduling doctor appointments, locating health clinics, or providing medication information. A 2025 review found that participants often rated chatbot responses as more empathic than those from clinicians. In 2020, WhatsApp worked with the World Health Organization and the Government of India to create chatbots that answer users' questions on COVID-19. In 2023, the US-based National Eating Disorders Association replaced its human helpline staff with a chatbot but had to take it offline after users reported receiving harmful advice from it.
=== Politics ===
In New Zealand, the chatbot SAM – short for Semantic Analysis Machine – was developed by Nick Gerritsen of Touchtech. It is designed to share its political thoughts on topics such as climate change, healthcare and education, and it talks to people through Facebook Messenger. In 2022, the chatbot "Leader Lars" or "Leder Lars" was nominated for The Synthetic Party to run in the Danish parliamentary election; it was built by the artist collective Computer Lars. Leader Lars differed from earlier virtual politicians by leading a political party and by not pretending to be an objective candidate. This chatbot engaged in critical discussions on politics with users from around the world. In India, the state government of Maharashtra has launched a chatbot for its Aaple Sarkar platform, which provides conversational access to information regarding the public services it manages.
=== Toys ===
Chatbots have also been incorporated into devices not primarily meant for computing, such as toys. Hello Barbie is an Internet-connected version of the doll that uses a chatbot provided by the company ToyTalk, which previously used the chatbot for a range of smartphone-based characters for children. These characters' behaviors are constrained by a set of rules that in effect emulate a particular character and produce a storyline. My Friend Cayla was marketed as a line of dolls that use speech recognition technology in conjunction with an Android or iOS mobile app to recognize the child's speech and hold a conversation. Like the Hello Barbie doll, it attracted controversy due to vulnerabilities in the doll's Bluetooth stack and its use of data collected from the child's speech. IBM's Watson computer has been used as the basis for chatbot-based educational toys for companies such as CogniToys.

=== Malicious use ===
Malicious chatbots are frequently used to fill
chat rooms with spam and advertisements by mimicking human behavior and conversations or to entice people into revealing personal information, such as bank account numbers. They were commonly found on
Yahoo! Messenger,
Windows Live Messenger,
AOL Instant Messenger and other
instant messaging protocols. There has also been a published report of a chatbot used in a fake personal ad on a dating service's website.
Tay, an AI chatbot designed to learn from previous interactions, caused major controversy after being targeted by internet trolls on Twitter. Soon after its launch, the bot was exploited, and with its "repeat after me" capability, it started releasing racist, sexist, and controversial responses to Twitter users. This suggests that although the bot learned effectively from experience, adequate protection was not put in place to prevent misuse.

If a text-sending algorithm can pass itself off as a human instead of a chatbot, its message will be more credible. Therefore, human-seeming chatbots with well-crafted online identities could start spreading fake news that seems plausible, for instance making false claims during an election. With enough chatbots, it might even be possible to achieve artificial social proof.
=== Data security ===
Data security is one of the major concerns of chatbot technologies. Security threats and system vulnerabilities are weaknesses that are often exploited by malicious users. Storage of user data and past communication, which is highly valuable for the training and development of chatbots, can also give rise to security threats. Chatbots operating on third-party networks may be subject to various security issues if the owners of the third-party applications have policies regarding user data that differ from those of the chatbot.

Chatbots can give a sense of privacy and anonymity when sharing sensitive information, as well as providing a space that allows the user to be free of judgment. Findings suggest that chatbots have great potential in scenarios in which it is difficult for users to reach out to family or friends for support, whether in guided, semi-guided, or open-ended conversation. At the same time, there are ongoing privacy concerns with sharing users' personal data in chat logs with chatbots. Another notable risk is chatbots' general lack of a strong understanding of mental health, which can be harmful to people already prone to delusional and conspiratorial thinking. This is caused in part by chatbots "hallucinating" information and by their design for engagement, which aims to keep people talking.

== Limitations ==