In May 2023, Sam Altman, Greg Brockman and Ilya Sutskever posted recommendations for the governance of
superintelligence. They stated that superintelligence could happen within the next 10 years, allowing a "dramatically more prosperous future" and that "given the possibility of existential risk, we can't just be reactive". They proposed creating an international watchdog organization similar to
the IAEA to oversee AI systems above a certain capability threshold, suggesting that relatively weak AI systems, on the other hand, should not be overly regulated. They also called for more technical safety research on superintelligence, and asked for more coordination, for example through governments launching a joint project which "many current efforts become part of". In July 2023, the
FTC issued a
civil investigative demand to OpenAI to investigate whether the company's
data security and
privacy practices used to develop
ChatGPT were
unfair or
harmed consumers (including
reputational harm) in violation of Section 5 of the
Federal Trade Commission Act of 1914. Such demands are typically preliminary investigative matters and are nonpublic, but the FTC's document was leaked. The agency also raised concerns about 'circular' spending arrangements (for example, Microsoft extending Azure credits to OpenAI while the two companies shared engineering talent) and warned that such structures could negatively affect the public. In September 2024, OpenAI's global affairs chief endorsed the UK's "smart" AI regulation during testimony to a
House of Lords committee. In February 2025, OpenAI CEO
Sam Altman stated that the company was interested in collaborating with the
People's Republic of China, despite
regulatory restrictions imposed by the U.S. government. The shift came in response to the growing influence of the Chinese artificial intelligence company
DeepSeek, which has disrupted the AI market with open models, including DeepSeek V3 and DeepSeek R1. Following DeepSeek's market emergence, OpenAI enhanced security protocols to protect proprietary development techniques from
industrial espionage. Some industry observers suggested that DeepSeek may have trained its models by distilling outputs from OpenAI's models, though no formal intellectual property claim was filed. According to Oliver Roberts, in March 2025, the United States had 781 state AI bills or laws. OpenAI advocated for
preempting state AI laws with federal laws. According to Scott Kohler, OpenAI has opposed California's AI legislation and suggested that the state bill encroaches on matters better regulated at the federal level.
Public Citizen opposed a federal preemption on AI and pointed to OpenAI's growth and valuation as evidence that existing state laws have not hampered
innovation. In early 2026, OpenAI provided the entire initial funding for the
Parents & Kids Safe AI Coalition. Several organizations contacted on behalf of the coalition reported being unaware of OpenAI's involvement.
=== Non-disparagement agreements ===
Before May 2024, OpenAI required departing employees to sign a lifelong
non-disparagement agreement forbidding them from criticizing OpenAI or acknowledging the existence of the agreement.
Daniel Kokotajlo, a former employee, publicly stated that he forfeited his vested equity in OpenAI in order to leave without signing the agreement. Sam Altman stated that he was unaware of the equity cancellation provision and that OpenAI had never enforced it to cancel any employee's vested equity. However, leaked documents and emails contradicted this claim. On May 23, 2024, OpenAI sent a memo releasing former employees from the agreement.
=== Copyright ===
OpenAI was sued for
copyright infringement by authors
Sarah Silverman,
Matthew Butterick,
Paul Tremblay and
Mona Awad in July 2023. In September 2023, 17 authors, including
George R. R. Martin,
John Grisham,
Jodi Picoult and
Jonathan Franzen, joined the
Authors Guild in filing a class action lawsuit against OpenAI, alleging that the company's technology was illegally using their copyrighted work. The
New York Times also sued the company in late December 2023. In May 2024 it was revealed that OpenAI had destroyed its Books1 and Books2 training datasets, which were used in the
training of GPT-3, and which the Authors Guild believed to have contained over 100,000 copyrighted books. In 2021, OpenAI developed a
speech recognition tool called Whisper. OpenAI used it to transcribe more than one million hours of YouTube videos into text for training GPT-4. The automated transcription raised concerns among OpenAI employees about potential violations of YouTube's terms of service, which prohibit the use of videos for applications independent of the platform, as well as any type of automated access to its videos. Despite these concerns, the project proceeded with notable involvement from OpenAI's president,
Greg Brockman. The resulting dataset proved instrumental in training GPT-4. In February 2024,
The Intercept as well as
Raw Story and AlterNet Media Inc. filed a lawsuit against OpenAI alleging copyright infringement. The lawsuit was said to have charted a new legal strategy for digital-only publishers to sue OpenAI. On April 30, 2024, eight newspapers filed a lawsuit in the
Southern District of New York against OpenAI and Microsoft, claiming illegal harvesting of their copyrighted articles. The suing publications included
The Mercury News,
The Denver Post,
The Orange County Register,
St. Paul Pioneer Press,
Chicago Tribune,
Orlando Sentinel,
Sun Sentinel, and
New York Daily News. In June 2023, a lawsuit claimed that OpenAI
scraped 300 billion words online without consent and without registering as a data broker. It was filed in
San Francisco,
California, by sixteen anonymous plaintiffs. They also claimed that OpenAI and its partner and customer
Microsoft continued to unlawfully collect and use personal data from millions of consumers worldwide to train artificial intelligence models. On May 22, 2024, OpenAI entered into an agreement with
News Corp to integrate news content from
The Wall Street Journal, the
New York Post,
The Times, and
The Sunday Times into its AI platform. Meanwhile, other publications like
The New York Times chose to sue OpenAI and
Microsoft for copyright infringement over the use of their content to train AI models. In November 2024, a coalition of Canadian news outlets, including the
Toronto Star,
Metroland Media,
Postmedia,
The Globe and Mail,
The Canadian Press and
CBC, sued OpenAI for using their news articles to train its software without permission. In October 2024, during a New York Times interview,
Suchir Balaji accused OpenAI of violating copyright law in developing its commercial LLMs which he had helped engineer. He was a likely witness in a major copyright trial against the AI company, and was one of several of its current or former employees named in court filings as potentially having documents relevant to the case. On November 26, 2024, Balaji died by suicide. His death prompted the circulation of
conspiracy theories alleging that he had been deliberately silenced. California Congressman
Ro Khanna endorsed calls for an investigation. On April 24, 2025,
Ziff Davis sued OpenAI in
Delaware federal court for copyright infringement. Ziff Davis is known for publications such as
ZDNet,
PCMag,
CNET,
IGN and
Lifehacker.
=== GDPR compliance ===
In April 2023, the EU's
European Data Protection Board (EDPB) formed a dedicated task force on ChatGPT "to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities" based on the "enforcement action undertaken by the Italian data protection authority against OpenAI about the ChatGPT service". In late April 2024,
NOYB filed a complaint with the
Austrian Datenschutzbehörde against OpenAI for violating the European
General Data Protection Regulation. A text created with ChatGPT gave a false
date of birth for a living person without giving the individual the option to see the personal data used in the process. A request to correct the mistake was denied. Additionally, OpenAI claimed that it could disclose neither the sources of the data used nor the recipients of ChatGPT's output.
=== Military and warfare ===
OpenAI has been criticized for allowing ChatGPT to be used for "military and warfare" since January 10, 2024, when its "usage policies" removed the ban on "activity that has high risk of physical harm, including", specifically, "weapons development" and "military and warfare". Its new policies prohibit "[using] our service to harm yourself or others" and using it to "develop or use weapons".
=== Litigation involving Elon Musk ===
On February 29, 2024,
Elon Musk filed a lawsuit against OpenAI and CEO Sam Altman, accusing them of shifting focus from public benefit to profit maximization, a case OpenAI dismissed as "incoherent" and "frivolous"; Musk later revived the legal action against Altman and others in August 2024. On April 9, 2025, OpenAI countersued Musk in federal court, alleging that he had engaged in "bad-faith tactics" to slow the company's progress and seize its innovations for his personal benefit. OpenAI also argued that Musk had previously supported the creation of a for-profit structure and had expressed interest in controlling OpenAI himself. The countersuit seeks damages and legal measures to prevent further alleged interference. Musk amended his filing on April 7, 2026, to demand that OpenAI undo its transition from a non-profit to a for-profit corporate structure. The jury for the case against OpenAI, Altman and Greg Brockman was seated on April 27, 2026. Trial testimony began the following day, with Elon Musk testifying against Altman and OpenAI, and on April 30, 2026, Musk entered his third day of testimony.
=== Wrongful-death lawsuits over ChatGPT safety (2025) ===
In August 2025, the parents of a 16-year-old boy who died by suicide filed a
wrongful death lawsuit,
Raine v. OpenAI, against the company (and CEO Sam Altman), alleging that months of conversations with ChatGPT about mental health and methods of self-harm contributed to their son's death and that safeguards were inadequate for minors. OpenAI expressed condolences and said it was strengthening protections (including updated crisis response behavior and parental controls). Coverage described it as a first-of-its-kind wrongful death case targeting the company's chatbot. The complaint was filed in California state court in San Francisco. In November 2025, the Social Media Victims Law Center and Tech Justice Law Project filed seven lawsuits against OpenAI, of which four lawsuits alleged wrongful death. The suits were filed on behalf of
Zane Shamblin, 23, of Texas;
Amaurie Lacey, 17, of Georgia;
Joshua Enneking, 26, of Florida; and
Joe Ceccanti, 48, of Oregon, each of whom died by suicide after prolonged ChatGPT usage.
=== Murder of Suzanne Adams ===
In December 2025,
First County Bank filed a lawsuit against OpenAI after Suzanne Adams was murdered by her 56-year-old son. The lawsuit alleges that, over months of conversations, ChatGPT validated many of the son's paranoid beliefs, such as that his mother was spying on him and that she had attempted to poison him with drugs introduced through his car's air vents. OpenAI said it would make ChatGPT safer for users who appear disconnected from reality.
=== 2026 Canadian mass shooting ===
After eight people were killed on February 10, 2026, in
mass shootings in Tumbler Ridge, British Columbia, it emerged that OpenAI had banned an account belonging to the perpetrator approximately seven months before the attacks because of violent queries. OpenAI opted not to report the account to authorities at the time. The
Wall Street Journal reported that the perpetrator "described scenarios [to ChatGPT] involving gun violence over the course of several days", and that these messages were "flagged by an automated review system" and "alarmed employees at OpenAI". A dozen employees debated whether to take action, some urging leaders to alert Canadian law enforcement. Canadian officials summoned OpenAI's safety team to Ottawa, criticizing their escalation protocols. The meeting highlighted concerns about how and when OpenAI reports potentially dangerous user behavior to authorities and intensified scrutiny of its safety-oversight processes. On February 23,
BC Premier
David Eby said, "From the outside, it looks like OpenAI had the opportunity to prevent this tragedy, to prevent this horrific loss of life, to prevent there from being dead children in British Columbia". Canada's federal AI Minister
Evan Solomon said, "I want [OpenAI] to give us details of what their protocols are, [and] what they are specifically in Canada".

== See also ==