== Applications ==
• GPT-3, specifically the Codex model, was the basis for GitHub Copilot, a code completion and generation tool available in various code editors and IDEs.
• GPT-3 is used in certain Microsoft products to translate conventional language into formal computer code.
• GPT-3 has been used in CodexDB to generate query-specific code for SQL processing.
• GPT-3 has been used by Jason Rohrer in a retro-themed chatbot project named "Project December", which is accessible online and allows users to converse with several AIs using GPT-3 technology.
• GPT-3 was used by The Guardian to write an article about AI being harmless to human beings. It was fed some ideas and produced eight different essays, which were ultimately merged into one article.
• GPT-3 was used in AI Dungeon, which generates text-based adventure games. It was later replaced by a competing model after OpenAI changed its policy regarding generated content.
• GPT-3 is used to aid in writing copy and other marketing materials.
• A 2022 study from Drexel University suggested that GPT-3-based systems could be used to screen for early signs of Alzheimer's disease.
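Applications like those above typically reach the model through OpenAI's public completions API, sending a natural-language prompt and receiving generated text or code back. The sketch below only builds the JSON request body for such a call; the parameter names mirror OpenAI's documented completions API, but the model identifier is illustrative and no network request is made:

```python
import json


def build_completion_request(prompt: str,
                             model: str = "code-davinci-002",
                             max_tokens: int = 64) -> str:
    """Build the JSON body for a completions-style API call.

    Field names follow OpenAI's public completions API; this is a
    local sketch only -- nothing is sent over the network.
    """
    body = {
        "model": model,          # a Codex-family model identifier (assumed)
        "prompt": prompt,        # natural-language description of the task
        "max_tokens": max_tokens,
        "temperature": 0,        # favor deterministic output for code generation
    }
    return json.dumps(body)


# Example: asking a Codex-style model to translate plain English into SQL,
# the pattern used by tools such as CodexDB.
request_body = build_completion_request(
    "-- Write a SQL query that counts users who signed up in 2021\n"
)
```

In a real integration, this body would be POSTed to the API with an authorization header, and the generated code would be read from the response's `choices` field.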
== Reviews ==
• In a July 2020 review in The New York Times, Farhad Manjoo said that GPT-3's ability to generate computer code, poetry, and prose is not just "amazing", "spooky", and "humbling", but also "more than a little terrifying".
• Daily Nous presented a series of articles by nine philosophers on GPT-3. Australian philosopher David Chalmers described GPT-3 as "one of the most interesting and important AI systems ever produced".
• A review in Wired said that GPT-3 was "provoking chills across Silicon Valley".
• The National Law Review said that GPT-3 is an "impressive step in the larger process", with OpenAI and others finding "useful applications for all of this power" while continuing to "work toward a more general intelligence".
• An article in the MIT Technology Review, co-written by deep learning critic Gary Marcus, stated that GPT-3's "comprehension of the world is often seriously off, which means you can never really trust what it says". According to the authors, GPT-3 models relationships between words without an understanding of the meaning behind each word.
• Jerome Pesenti, head of the Facebook AI lab, said GPT-3 is "unsafe", pointing to the sexist, racist, and other biased and negative language generated by the system when it was asked to discuss Jews, women, black people, and the Holocaust.
• Nabla, a French start-up specializing in healthcare technology, tested GPT-3 as a medical chatbot, though OpenAI itself warned against such use. As expected, GPT-3 showed several limitations; for example, while testing GPT-3's responses about mental health issues, the AI advised a simulated patient to commit suicide.
• Noam Chomsky expressed his skepticism about GPT-3's scientific value: "It's not a language model. It works just as well for impossible languages as for actual languages. It is therefore refuted, if intended as a language model, by normal scientific criteria. [...] Perhaps it's useful for some purpose, but it seems to tell us nothing about language or cognition generally."
• Luciano Floridi and Massimo Chiriatti highlighted the risk of "cheap production of good, semantic artefacts".
• OpenAI's Sam Altman himself criticized what he called "GPT-3 hype", acknowledging that GPT-3 "has serious weaknesses and sometimes makes very silly mistakes... AI is going to change the world, but GPT-3 is just a very early glimpse."
== Criticism ==
GPT-3's builder, OpenAI, was initially founded as a non-profit in 2015. In 2019, OpenAI broke from its usual open-source standards by not publicly releasing GPT-2, GPT-3's predecessor model, citing concerns that the model could facilitate the propagation of fake news. OpenAI eventually released a version of GPT-2 that was 8% of the original model's size. In the same year, OpenAI restructured to be a for-profit company. In 2020, Microsoft announced that it had exclusively licensed GPT-3 for Microsoft's products and services following a multi-billion-dollar investment in OpenAI. The agreement permits OpenAI to offer a public-facing API through which users can send text to GPT-3 and receive the model's output, but only Microsoft has access to GPT-3's source code.

The growing use of automated writing technologies based on GPT-3 and other language generators has raised concerns regarding academic integrity and raised the stakes of how universities and schools will gauge what constitutes academic misconduct such as plagiarism.

OpenAI's GPT series was built with data from the Common Crawl dataset, a conglomerate of copyrighted articles, internet posts, web pages, and books scraped from 60 million domains over a period of 12 years. TechCrunch reports that this training data includes copyrighted material from the BBC, The New York Times, Reddit, the full text of online books, and more. In its response to a 2019 Request for Comments on Intellectual Property Protection for Artificial Intelligence Innovation from the United States Patent and Trademark Office (USPTO), OpenAI argued that "Under current law, training AI systems [such as its GPT models] constitutes fair use," but that "given the lack of case law on point, OpenAI and other AI developers like us face substantial legal uncertainty and compliance costs."

== See also ==