== Classical AI ==
Modern AI research began in the mid-1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer
Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do". Their predictions were the inspiration for
Stanley Kubrick and
Arthur C. Clarke's fictional character
HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer
Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time. He said in 1967, "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved". Several
classical AI projects, such as
Doug Lenat's
Cyc project (that began in 1984), and
Allen Newell's
Soar project, were directed at AGI. However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". In the early 1980s, Japan's
Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of
expert systems, both industry and government pumped money into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who predicted the imminent achievement of AGI had been mistaken. By the 1990s, AI researchers had a reputation for making vain promises. They became reluctant to make predictions at all and avoided mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamer[s]".
== Narrow AI research ==
In the 1990s and early 21st century, mainstream AI achieved commercial success and academic respectability by focusing on specific sub-problems where AI can produce verifiable results and commercial applications, such as
speech recognition and
recommendation algorithms. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is heavily funded in both academia and industry. Development in this field was considered an emerging trend, with a mature stage expected to be reached in more than 10 years. At the turn of the century, many mainstream AI researchers hoped that strong AI could be developed by combining programs that solve various sub-problems. However, even at the time, this was disputed. For example,
Stevan Harnad of Princeton University concluded his 1990 paper on the
symbol grounding hypothesis by stating:

The expectation has often been voiced that "top-down" (symbolic) approaches to modeling cognition will somehow meet "bottom-up" (sensory) approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route (or vice versa) – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings (thereby merely reducing ourselves to the functional equivalent of a programmable computer).
== Feasibility ==
Herbert A. Simon speculated in 1965 that "machines will be capable, within twenty years, of doing any work a man can do". This prediction failed to come true.
Microsoft co-founder
Paul Allen believed that such intelligence is unlikely in the 21st century because it would require "unforeseeable and fundamentally unpredictable breakthroughs" and a "scientifically deep understanding of cognition". Writing in
The Guardian, roboticist
Alan Winfield claimed in 2014 that the gulf between modern computing and human-level artificial intelligence is as wide as the gulf between current space flight and practical faster-than-light spaceflight. An additional challenge is the lack of clarity in defining what
intelligence entails. Does it require consciousness? Must it display the ability to set goals as well as pursue them? Is it purely a matter of scale, such that if model sizes increase sufficiently, intelligence will emerge? Are faculties such as planning, reasoning, and causal understanding required? Does intelligence require explicitly replicating the brain and its specific faculties? Does it require emotions? Most AI researchers believe strong AI can be achieved in the future, but some thinkers, like
Hubert Dreyfus and
Roger Penrose, deny the possibility of achieving strong AI.
John McCarthy is among those who believe human-level AI will be accomplished, but that the present level of progress is such that a date cannot accurately be predicted. AI experts' views on the feasibility of AGI wax and wane. Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when they would be 50% confident AGI would arrive was 2040 to 2050, depending on the poll, with the mean being 2081. Of the same experts, 16.5% answered "never" when instead asked when they would be 90% confident.
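As a toy illustration of why the mean (2081) sits so far above the median range (2040–2050): a minority of very late forecasts pulls the mean upward while barely moving the median. The forecast years in the sketch below are invented for illustration and are not the actual poll data.

```python
import statistics

# Invented forecast years for illustration only; not the poll data.
forecasts = [2035, 2040, 2045, 2045, 2050, 2055, 2060, 2150, 2200, 2300]

print(statistics.median(forecasts))  # 2052.5: robust to the late outliers
print(statistics.mean(forecasts))    # 2098.0: dragged upward by them
```

Further considerations on current AGI progress can be found above under Tests for confirming human-level AGI. A report by Stuart Armstrong and Kaj Sotala of the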
Machine Intelligence Research Institute found that "over [a] 60-year time frame there is a strong bias towards predicting the arrival of human-level AI as between 15 and 25 years from the time the prediction was made". They analyzed 95 predictions made between 1950 and 2012 on when human-level AI will come about. In 2023,
Microsoft researchers published a detailed evaluation of
GPT-4. They concluded: "Given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." Another study in 2023 reported that GPT-4 outperforms 99% of humans on the
Torrance tests of creative thinking.
Blaise Agüera y Arcas and
Peter Norvig wrote the 2023 article "Artificial General Intelligence Is Already Here", arguing that
frontier models had already achieved a significant level of general intelligence. They wrote that resistance to this view stems from four main reasons: a "healthy skepticism about metrics for AGI", an "ideological commitment to alternative AI theories or techniques", a "devotion to human (or biological) exceptionalism", or a "concern about the economic implications of AGI".
== Timescales ==
Current AI systems still lack advanced reasoning and planning capabilities, but rapid progress is expected. Progress in artificial intelligence has historically alternated between periods of rapid advancement and periods when progress appeared to stall. Each hiatus ended with fundamental advances in hardware, software, or both, which created room for further progress. For example, the computer hardware available in the twentieth century was not sufficient to implement
deep learning, which requires large numbers of
GPU-enabled
CPUs. In the introduction to his 2006 book, Goertzel says that estimates of the time needed before a truly flexible AGI is built vary from 10 years to over a century. At the time, the consensus in the AGI research community seemed to be that the timeline discussed by
Ray Kurzweil in 2005 in
The Singularity Is Near (i.e. between 2015 and 2045) was plausible. Mainstream AI researchers have given a wide range of opinions on whether progress will be this rapid. A 2012 meta-analysis of 95 such opinions found a bias towards predicting that the onset of AGI would occur within 16–26 years, for modern and historical predictions alike. That paper has been criticized for how it categorized opinions as expert or non-expert. In 2012,
Alex Krizhevsky,
Ilya Sutskever, and
Geoffrey Hinton developed a neural network called
AlexNet, which won the
ImageNet competition with a top-5 test error rate of 15.3%, significantly better than the second-best entry's rate of 26.3% (the traditional approach used a weighted sum of scores from different pre-defined classifiers). AlexNet is widely regarded as the breakthrough that set off the current wave of deep learning.
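To make the benchmark figure concrete, here is a minimal sketch of how a top-5 test error rate is computed: a sample counts as correctly classified if its true label appears among the model's five highest-scoring classes. The scores and labels below are randomly generated for illustration and are not ImageNet data.

```python
import numpy as np

def top5_error(scores: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose true label is NOT among the five
    highest-scoring classes. scores has shape (n_samples, n_classes);
    labels holds integer class indices, shape (n_samples,)."""
    top5 = np.argsort(scores, axis=1)[:, -5:]       # five best classes per sample
    hits = np.any(top5 == labels[:, None], axis=1)  # is the true label among them?
    return 1.0 - hits.mean()

# With random scores over 1,000 classes, the top-5 error approaches
# 1 - 5/1000 = 99.5%, which puts AlexNet's 15.3% in perspective.
rng = np.random.default_rng(0)
scores = rng.random((10_000, 1_000))
labels = rng.integers(0, 1_000, size=10_000)
print(f"random-guess top-5 error: {top5_error(scores, labels):.1%}")
```

In 2020,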
OpenAI developed
GPT-3, a language model capable of performing many diverse tasks without specific training. According to
Gary Grossman in a
VentureBeat article, while there is consensus that GPT-3 is not an example of AGI, it is considered by some to be too advanced to be classified as a narrow AI system.
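As an illustration of what performing tasks "without specific training" looks like in practice, here is a hypothetical few-shot prompt in the style popularized by the GPT-3 paper: the task is specified entirely in plain text, and the model's weights are never updated. The example below is invented for illustration, and no real API call is shown.

```python
# A hypothetical few-shot prompt in the GPT-3 style: the task
# (English-to-French translation) is defined entirely by the prompt,
# with no task-specific fine-tuning of the model.
prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese => fromage\n"
    "plush giraffe =>"
)
# A GPT-3-class model would be expected to continue the pattern
# with something like " girafe en peluche".
print(prompt)
```

Also in 2020, Jason Rohrer used his GPT-3 account to develop a chatbot and offered a chatbot-development platform called "Project December". OpenAI asked for changes to the chatbot to comply with their safety guidelines; Rohrer disconnected Project December from the GPT-3 API. In 2022, DeepMind developed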
Gato, a "general-purpose" system capable of performing more than 600 different tasks. In 2023, AI researcher
Geoffrey Hinton stated that he had previously thought AGI was 30 to 50 years or even longer away, but that he no longer believed so. He estimated in 2024 (with low confidence) that systems smarter than humans could appear within 5 to 20 years and stressed the attendant existential risks. In May 2023,
Demis Hassabis similarly said that "The progress in the last few years has been pretty incredible", that he saw no reason why it would slow down, and that he expected AGI within a decade or even a few years. In March 2024,
Nvidia's Chief Executive Officer (CEO),
Jensen Huang, stated his expectation that within five years, AI would be capable of passing any test at least as well as humans. In June 2024, the AI researcher
Leopold Aschenbrenner, a former
OpenAI employee, estimated AGI by 2027 to be "strikingly plausible". In September 2025, a review of surveys of scientists and industry experts from the previous 15 years reported that most agreed AGI would arrive before the year 2100. A more recent analysis by AIMultiple reported that "Current surveys of AI researchers are predicting AGI around 2040".

== Whole brain emulation ==