He then joined the faculty of the
University of California, Berkeley as a professor of computer science. From 2008 to 2011 he also held an appointment as adjunct professor of Neurological Surgery at the
University of California, San Francisco, where he pursued research in computational physiology and
intensive-care unit monitoring. His research includes contributions to
machine learning,
probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking,
computer vision, and inverse
reinforcement learning. In 2016, he founded the Center for Human-Compatible Artificial Intelligence at UC Berkeley, with co-
principal investigators
Pieter Abbeel, Anca Dragan,
Tom Griffiths,
Bart Selman,
Joseph Halpern,
Michael Wellman and Satinder Singh Baveja. Russell has published several hundred conference and journal articles as well as several books, including
The Use of Knowledge in Analogy and Induction and
Do the Right Thing: Studies in Limited Rationality (with Eric Wefald). He and
Peter Norvig were the authors of
Artificial Intelligence: A Modern Approach, a textbook used by over 1,500 universities in 135 countries. He is on the Scientific Advisory Board for the
Future of Life Institute and the advisory board of the
Centre for the Study of Existential Risk. In 2017 he collaborated with the
Future of Life Institute to produce a video,
Slaughterbots, about swarms of
drones assassinating political opponents, and presented this to a United Nations meeting about the
Convention on Certain Conventional Weapons. In 2018 he contributed an interview to the documentary
Do You Trust This Computer?. His book,
Human Compatible: Artificial Intelligence and the Problem of Control, was published by Viking on 8 October 2019. His work is aligned with
Human-Centered Artificial Intelligence themes. His former doctoral students include
Marie desJardins,
Eric Xing and
Shlomo Zilberstein. In 2021, Russell delivered the BBC Reith Lectures, with lectures on "The Biggest Event in Human History", "AI in Warfare", "AI in the Economy" and "AI: A Future for Humans". In March 2023, Russell signed an
open letter from the
Future of Life Institute calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than
GPT-4". The letter has been signed by over 30,000 individuals, including AI researchers such as
Yoshua Bengio and
Gary Marcus. In a January 2025 article in
Newsweek, Russell wrote, "In other words, the AGI race is a race towards the edge of a cliff."

==Awards and honors==