He has addressed technical and societal challenges and opportunities arising from the fielding of AI technologies in the open world, including AI safety and robustness and cases where AI systems and capabilities can have inadvertent effects, pose dangers, or be misused. He has presented on caveats regarding applications of AI in military settings. He and
Thomas G. Dietterich called for work on
AI alignment, saying that AI systems "must reason about what people intend rather than carrying out commands literally." He and privacy scholar
Deirdre Mulligan stated that society must balance privacy concerns with the benefits of using data for the social good. He has presented on the risks of AI-enabled deepfakes and contributed to media provenance technologies that cryptographically certify the source and edit history of digital content.
== Asilomar AI study ==

He served as President of the
AAAI from 2007 to 2009. As AAAI President, he convened and co-chaired the Asilomar AI study, which culminated in a meeting of AI scientists at
Asilomar in February 2009. The study considered the nature and timing of AI successes and reviewed concerns about the direction of AI development, including the potential loss of control over computer-based intelligences, as well as efforts that could reduce those concerns and enhance long-term societal outcomes. The study was the first meeting of AI scientists to address concerns about superintelligence and loss of control of AI, and it attracted public interest. In coverage of the Asilomar study, he said that scientists must study and respond to notions of superintelligent machines and concerns about artificial intelligence systems escaping human control.
== One Hundred Year Study on Artificial Intelligence ==

In 2014, Horvitz and his wife defined and funded the One Hundred Year Study on Artificial Intelligence (AI100) at
Stanford University. In 2016, the AI Index was launched as a project of the One Hundred Year Study. According to Horvitz, the AI100 gift, which may increase in the future, is sufficient to fund the study for a century. Topics include abuses of AI that could pose threats to democracy and freedom, as well as the possibility of superintelligences and loss of control of AI. The One Hundred Year Study is overseen by a Standing Committee, which formulates questions and themes and organizes a Study Panel every five years. Each Study Panel issues a report that assesses the status and rate of progress of AI technologies, as well as challenges and opportunities with regard to AI's influence on people and society. The 2015 study panel of the One Hundred Year Study, chaired by
Peter Stone, released a report in September 2016, titled
"Artificial Intelligence and Life in 2030". The panel advocated for increased public and private spending on the industry, recommended increased AI expertise at all levels of government, and recommended against blanket government regulation. Panel chair Peter Stone argued that AI will not automatically replace human workers but will instead supplement the workforce and create new jobs in technology maintenance. Stone stated that "it was a conscious decision not to give credence to this in the report." The study's second report, chaired by
Michael Littman, was published in 2021.
== Founding of Partnership on AI ==

He co-founded and has served as board chair of the
Partnership on AI, a non-profit organization bringing together Apple, Amazon, Facebook, Google, DeepMind, IBM, and Microsoft with representatives from civil society, academia, and non-profit R&D. The organization's website points to initiatives including studies of risk scores in criminal justice, facial recognition systems, AI and the economy, AI safety, AI and media integrity, and documentation of AI systems.
== Microsoft Aether Committee ==

He founded and chairs the Aether Committee at Microsoft, the company's internal committee on the responsible development and fielding of AI technologies. He reported that the Aether Committee has made recommendations and guided decisions that have influenced Microsoft's commercial AI efforts. In April 2020, Microsoft published content on principles, guidelines, and tools developed by the Aether Committee and its working groups, including teams focused on AI reliability and safety, bias and fairness, intelligibility and explanation, and human-AI collaboration.

== Publications ==