The thesis that AI could pose an existential risk provokes a wide range of reactions in the scientific community and in the public at large, but many of the opposing viewpoints share common ground. Observers tend to agree that AI has significant potential to improve society. There is also broad agreement in principle that "There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities" and that "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Conversely, many skeptics agree that ongoing research into the implications of artificial general intelligence is valuable. Skeptic
Martin Ford has said: "I think it seems wise to apply something like
Dick Cheney's famous '1 Percent Doctrine' to the specter of advanced artificial intelligence: the odds of its occurrence, at least in the foreseeable future, may be very low—but the implications are so dramatic that it should be taken seriously". Similarly, an otherwise skeptical
Economist magazine wrote in 2014 that "the implications of introducing a second intelligent species onto Earth are far-reaching enough to deserve hard thinking, even if the prospect seems remote". Toby Ord wrote that the idea that an
AI takeover requires robots is a misconception, arguing that the ability to spread content through the internet is more dangerous, and that the most destructive people in history stood out for their ability to persuade, not their physical strength. In September 2024, the
International Institute for Management Development launched an AI Safety Clock to gauge the likelihood of AI-caused disaster, beginning at 29 minutes to midnight. The clock moved to 24 minutes to midnight by February 2025, 20 minutes by September 2025, and 18 minutes as of March 2026.
=== Endorsement ===
The thesis that AI poses an existential risk, and that this risk needs much more attention than it currently gets, has been endorsed by many computer scientists and public figures, including
Alan Turing, the most-cited computer scientist
Geoffrey Hinton,
Elon Musk,
Bill Gates, and
Stephen Hawking. Hawking criticized widespread indifference to the risk in a 2014 editorial. Concern over risk from artificial intelligence has led to some high-profile donations and investments. In 2015,
Peter Thiel,
Amazon Web Services, Musk, and others jointly committed $1 billion to
OpenAI, consisting of a for-profit corporation and the nonprofit parent company, which says it aims to champion responsible AI development. Facebook co-founder
Dustin Moskovitz has funded and seeded multiple labs working on AI alignment, notably providing $5.5 million in 2016 to launch the
Center for Human-Compatible AI led by Professor
Stuart Russell. In January 2015,
Elon Musk donated $10 million to the
Future of Life Institute to fund research on understanding AI decision making. The institute's goal is to "grow wisdom with which we manage" the growing power of technology. Musk also funds companies developing artificial intelligence such as
DeepMind and
Vicarious to "just keep an eye on what's going on with artificial intelligence, saying "I think there is potentially a dangerous outcome there." In early statements on the topic,
Geoffrey Hinton, a major pioneer of
deep learning, noted that "there is not a good track record of less intelligent things controlling things of greater intelligence", but said he continued his research because "the prospect of discovery is too
sweet". In 2023 Hinton quit his job at Google in order to speak out about existential risk from AI. He explained that his increased concern was driven by concerns that superhuman AI might be closer than he previously believed, saying: "I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that." He also remarked, "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary." In his 2020 book
The Precipice: Existential Risk and the Future of Humanity, Toby Ord, a Senior Research Fellow at Oxford University's
Future of Humanity Institute, estimates the total existential risk from unaligned AI over the next 100 years at about one in ten.
=== Skepticism ===
Baidu Vice President
Andrew Ng said in 2015 that AI existential risk is "like worrying about overpopulation on Mars when we have not even set foot on the planet yet." For the danger of uncontrolled advanced AI to be realized, the hypothetical AI may have to overpower or outthink any human, which some experts argue is a possibility far enough in the future to not be worth researching. Skeptics who believe AGI is not a short-term possibility often argue that concern about existential risk from AI is unhelpful because it could distract people from more immediate concerns about AI's impact, because it could lead to government regulation or make it more difficult to fund AI research, or because it could damage the field's reputation. AI and AI ethics researchers
Timnit Gebru,
Emily M. Bender,
Margaret Mitchell, and Angelina McMillan-Major have argued that discussion of existential risk distracts from the immediate, ongoing harms from AI taking place today, such as data theft, worker exploitation, bias, and concentration of power. They further note the association between those warning of existential risk and
longtermism, which they describe as a "dangerous ideology" for its unscientific and utopian nature.
Wired editor
Kevin Kelly argues that natural intelligence is more nuanced than AGI proponents believe, and that intelligence alone is not enough to achieve major scientific and societal breakthroughs. He contends that intelligence consists of many dimensions that are not well understood, and that conceptions of an 'intelligence ladder' are misleading. He also notes the crucial role that real-world experiments play in the scientific method, for which intelligence alone is no substitute.
Meta chief AI scientist
Yann LeCun says that AI can be made safe via continuous and iterative refinement, similar to what happened in the past with cars or rockets, and that AI will have no desire to take control. Several skeptics emphasize the potential near-term benefits of AI. Meta CEO
Mark Zuckerberg believes AI will "unlock a huge amount of positive things", such as curing disease and increasing the safety of autonomous cars.
=== Public surveys ===
An April 2023
YouGov poll of US adults found 46% of respondents were "somewhat concerned" or "very concerned" about "the possibility that AI will cause the end of the human race on Earth", compared with 40% who were "not very concerned" or "not at all concerned." According to an August 2023 survey by the Pew Research Center, 52% of Americans felt more concerned than excited about new AI developments, while nearly a third felt equally concerned and excited. More Americans expected AI to have a helpful rather than hurtful impact in several areas, from healthcare and vehicle safety to product search and customer service. The main exception was privacy: 53% of Americans believed AI would lead to greater exposure of their personal information.

== Mitigation ==