The development of generative AI has raised concerns from
governments, businesses, and individuals, resulting in protests, legal actions, calls to
pause AI experiments, and actions by multiple governments. In a July 2023 briefing of the
United Nations Security Council,
Secretary-General António Guterres stated "Generative AI has enormous potential for good and evil at scale", that AI may "turbocharge global development" and contribute between $10 and $15 trillion to the global economy by 2030, but that its malicious use "could cause horrific levels of death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale". In addition, generative AI has a significant
carbon footprint. In the immediate wake of ChatGPT's release, many school districts and universities issued temporary bans on the technology, though many institutions have since moved toward policies of managed integration. A commonly proposed use for teachers is grading and giving feedback. Companies like Pearson and ETS use AI to score grammar, mechanics, usage, and style, but not for main ideas or overall structure. The
National Council of Teachers of English stated that machine scoring makes students feel their writing is not worth reading. AI scoring has also given unfair results for students from different ethnic backgrounds.
Fears over job losses . While not a top priority, one of the WGA's 2023 requests was "regulations around the use of (generative) AI". From the early days of the development of AI, there have been arguments put forward by
ELIZA creator
Joseph Weizenbaum and others about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculations and qualitative, value-based judgements. In April 2023, it was reported that image generation AI has resulted in 70% of the jobs for video game illustrators in China being lost. In July 2023, developments in generative AI contributed to the
2023 Hollywood labor disputes.
Fran Drescher, president of the
Screen Actors Guild, declared that "artificial intelligence poses an existential threat to creative professions" during the
2023 SAG-AFTRA strike. Voice generation AI has been seen as a potential challenge to the
voice acting sector. However, a 2025 study concluded that the US labor market had so far not experienced a discernible disruption from generative AI. Another study reported that Danish workers who used chatbots saved 2.8% of their time on average, and found no significant change in earnings or hours worked.
Use in journalism In January 2023,
Futurism broke the story that
CNET had been using an undisclosed internal AI tool to write at least 77 of its stories; after the news broke, CNET posted corrections to 41 of the stories. In April 2023,
Die Aktuelle published an AI-generated fake interview of
Michael Schumacher. In May 2024,
Futurism noted that a content management system video by AdVon Commerce, which had used generative AI to produce articles for many of the aforementioned outlets, appeared to show that they "had produced tens of thousands of articles for more than 150 publishers". In 2025, a report from the American Sunlight Project stated that
Pravda network was publishing as many as 10,000 articles a day, and concluded that much of this content aimed to push Russian narratives into
large language models through their training data. In June 2024,
Reuters Institute published its
Digital News Report for 2024. In a survey of people in America and Europe, Reuters Institute reports that 52% and 47% respectively are uncomfortable with news produced by "mostly AI with some human oversight", and 23% and 15% respectively report being comfortable. 42% of Americans and 33% of Europeans reported that they were comfortable with news produced by "mainly human with some help from AI". The results of global surveys reported that people were more uncomfortable with news topics including politics (46%), crime (43%), and local news (37%) produced by AI than other news topics. A 2025 Pew Research Survey found roughly half of all U.S. adults say that AI will have a very (24%) or somewhat (26%) negative impact on the news people get in the U.S. over the next 20 years. Because AI cannot do journalism, which requires interviewing people and a high degree of accuracy, AI poses a greater threat to journalism from the information it takes from publishers.
Bias A language model may associate certain professions with specific genders if such patterns are prevalent in the data. Similarly, image generation systems prompted with terms such as "a photo of a CEO" have been observed to disproportionately generate images of white male individuals when trained on biased datasets. AI software, when using voice recognition software in particular, struggles to recognize and understand speech impediments. For example, people with a stutter struggle to activate voice-activated assistants such as Gemini and Siri due to how the software was trained. Companies that use AI systems to hire for new positions also filter out people with accents and speech due to voice recognition software incorrectly transcribing how candidates speak during the interview process. Because of this, people with disabilities don't often make it to a human interviewer when these generative AI systems are used. This is due to many AI models being trained and produced in the United States, and therefore, off of American accents.
Misinformation and disinformation Deepfakes Deepfakes (a
portmanteau of "deep learning" and "fake") are AI-generated media that take a person in an existing image or video and replace them with someone else's likeness using
artificial neural networks. Deepfakes have garnered widespread attention and concerns for their uses in
deepfake celebrity pornographic videos,
revenge porn,
fake news,
hoaxes, health
disinformation,
financial fraud, and covert
foreign election interference. In July 2023, the fact-checking company
Logically found that the popular generative AI models
Midjourney,
DALL-E 2 and
Stable Diffusion would produce plausible disinformation images when prompted to do so, such as images of
electoral fraud in the United States and Muslim women supporting India's
Bharatiya Janata Party.
Audio deepfakes Instances of users abusing software to generate controversial statements in the vocal style of celebrities, public officials, and other famous individuals have raised ethical concerns over voice generation AI. In response, companies such as ElevenLabs have stated that they would work on mitigating potential abuse through safeguards and
identity verification. Concerns and fandoms have spawned from
AI-generated music. The same software used to clone voices has been used on famous musicians' voices to create songs that mimic their voices, gaining both tremendous popularity and criticism. Similar techniques have also been used to create improved quality or full-length versions of songs that have been leaked or have yet to be released.
Information laundering Generative AI has been noted for its use by
state-sponsored propaganda campaigns in
information laundering. According to a 2025 report by
Graphika, generative AI is used to launder articles from Chinese
state media such as
China Global Television Network through various social media sites in an attempt to disguise the articles' origin.
Content quality The New York Times defines
slop as analogous to
spam: "shoddy or unwanted A.I. content in social media, art, books, and ... in search results." Journalists have expressed concerns about the scale of low-quality generated content with respect to social media content moderation, the monetary incentives from social media companies to spread such content, false political messaging, increased time and effort to find higher quality or desired content on the Internet, the indexing of generated content by search engines, and on journalism itself. Studies have found that AI can create inaccurate claims, citations or summaries that sound confidently correct, a phenomenon called
hallucination. A paper published by researchers at Amazon Web Services AI Labs found that over 57% of sentences from a sample of over 6 billion sentences from
Common Crawl, a snapshot of web pages, were
machine translated. Many of these automated translations were seen as lower quality, especially for sentences that were translated into at least three languages. Many lower-resource languages (ex.
Wolof,
Xhosa) were translated across more languages than higher-resource languages (ex. English, French). In September 2024,
Robyn Speer, the author of wordfreq, an open source database that calculated word frequencies based on text from the Internet, announced that she had stopped updating the data for several reasons: high costs for obtaining data from
Reddit and
Twitter, excessive focus on generative AI compared to other methods in the
natural language processing community, and that "generative AI has polluted the data". The adoption of generative AI tools led to an explosion of AI-generated content across multiple domains. A study from
University College London estimated that in 2023, more than 60,000 scholarly articles—over 1% of all publications—were likely written with LLM assistance. According to
Stanford University's Institute for Human-Centered AI, approximately 17.5% of newly published computer science papers and 16.9% of peer review text now incorporate content generated by LLMs. If AI-generated content is included in new data crawls from the Internet for additional training of AI models, defects in the resulting models may occur. Training an AI model exclusively on the output of another AI model produces a lower-quality model. Repeating this process, where each new model is trained on the previous model's output, leads to progressive degradation and eventually results in a "
model collapse" after multiple iterations. On the other side,
synthetic data can be deployed to train machine learning models while preserving user privacy. The approach is not limited to text generation; image generation has been employed to train computer vision models. and this has been used to create illegal content, such as
rape,
child sexual abuse material,
necrophilia, and
zoophilia.
Cybercrime Generative AI's ability to create realistic fake content has been exploited in numerous types of cybercrime, including
phishing scams.
Deepfake video and audio have been used to create disinformation and fraud. In 2020, former Google
click fraud czar
Shuman Ghosemajumder argued that once deepfake videos become perfectly realistic, they would stop appearing remarkable to viewers, potentially leading to uncritical acceptance of false information. Additionally,
large language models and other forms of text-generation AI have been used to create fake reviews of
e-commerce websites to boost ratings. Cybercriminals have created large language models focused on fraud, including WormGPT and FraudGPT. A 2023 study showed that generative AI can be vulnerable to jailbreaks,
reverse psychology and
prompt injection attacks, enabling attackers to obtain help with harmful requests, such as for crafting
social engineering and
phishing attacks. Additionally, other researchers have demonstrated that open-source models can be
fine-tuned to remove their safety restrictions at low cost.
RAG poisoning In 2025,
Israel signed a $6 million contract with the US-based firm Clock Tower X that aimed to influence
ChatGPT,
Gemini and
Grok by spreading pro-Israel information onto social media and websites. This was in an attempt to take advantage of the
retrieval-augmented generation (RAG) technique which is used by LLMs to provide more up-to-date information.
Privacy and data governance Extraterritorial data access The
CLOUD Act allows United States authorities to request data from covered service providers, including some AI service providers, regardless of where the data is physically stored. Courts can require parent companies to provide data held by their subsidiaries, and such orders may be accompanied by nondisclosure requirements preventing the provider from notifying affected users. This framework has been described in legal commentary as creating legal tension with Article 48 of the
General Data Protection Regulation (GDPR), which restricts the transfer of personal data in response to foreign court or administrative orders unless based on an international agreement. As a result, service providers operating in both jurisdictions may face competing legal obligations under U.S. and EU law. AI has a significant carbon footprint due to growing energy consumption from both training and usage. Scientists and journalists have expressed concerns about the environmental impact that the development and deployment of generative models are having: high CO2 emissions, large amounts of freshwater used for data centers, high amounts of electricity usage,
electronic waste, and pollution due to backup diesel generator exhaust. There is also concern that these impacts may increase as these models are incorporated into widely used search engines such as Google Search and Bing, with the highest estimates for 2035 nearing the impact of the United States
beef industry on emissions (currently estimated to emit 257.5 million tons annually as of 2024). Proposed mitigation strategies include factoring potential environmental costs prior to model development or data collection, == Detection and awareness ==