Degree of human control

Three classifications of the degree of human control of autonomous weapon systems were laid out by Bonnie Docherty in a 2012 Human Rights Watch report.

• Human-in-the-loop: a human must instigate the action of the weapon (in other words, not fully autonomous).
• Human-on-the-loop: a human may abort an action.
• Human-out-of-the-loop: no human action is involved.
Standard used in US policy

Department of Defense Directive 3000.09 states that "Autonomous … weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." However, as noted in the Bulletin of the Atomic Scientists, the policy requires only that autonomous weapon systems that kill people or use kinetic force, selecting and engaging targets without further human intervention, be certified as compliant with "appropriate levels" and other standards; it does not forbid such systems outright. "Semi-autonomous" hunter-killers that autonomously identify and attack targets do not even require certification. Deputy Defense Secretary
Robert O. Work said in 2016 that the Defense Department would "not delegate lethal authority to a machine to make a decision", but might need to reconsider this since "authoritarian regimes" may do so. In October 2016 President
Barack Obama stated that early in his career he was wary of a future in which a US president making use of
drone warfare could "carry on perpetual wars all over the world, and a lot of them covert, without any accountability or democratic debate". In the US, security-related AI has fallen under the purview of the National Security Commission on Artificial Intelligence since 2018. On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report outlining five principles for weaponized AI and making 12 recommendations for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. A major concern is how the report will be implemented.
Possible violations of ethics and international acts

Stuart Russell, professor of computer science at the University of California, Berkeley, has stated that he considers LAWs unethical and inhumane, the main issue being that such systems struggle to distinguish between combatants and non-combatants. Some economists and legal scholars are concerned about whether LAWs would violate
International Humanitarian Law, especially the principle of distinction, which requires the ability to discriminate combatants from non-combatants, and the
principle of proportionality, which requires that damage to civilians be proportional to the military aim. This concern is often invoked as a reason to ban "killer robots" altogether, but it is doubtful that it can serve as an argument against LAWs that do not violate International Humanitarian Law. A 2021 report by the American
Congressional Research Service states that "there are no domestic or international legal prohibitions on the development or use of LAWs," although it acknowledges ongoing talks at the
UN Convention on Certain Conventional Weapons (CCW). LAWs are said by some to blur the boundaries of who is responsible for a particular killing. Thomas Simpson and Vincent Müller argue that they may make it easier to record who gave which command. Potential IHL violations by LAWs are – by definition – only applicable in conflict settings that involve the need to distinguish between combatants and civilians. As such, any conflict scenario devoid of civilians' presence – i.e. in space or the deep seas – would not run into the obstacles posed by IHL.
Campaigns to ban LAWs

The possibility of LAWs has generated significant debate, especially about the risk of "killer robots" roaming the earth in the near or far future. The group
Campaign to Stop Killer Robots formed in 2013. In July 2015, over 1,000 experts in artificial intelligence signed a letter warning of the threat of an
artificial intelligence arms race and calling for a ban on
autonomous weapons. The letter was presented in
Buenos Aires at the 24th
International Joint Conference on Artificial Intelligence (IJCAI-15) and was co-signed by
Stephen Hawking,
Elon Musk,
Steve Wozniak,
Noam Chomsky,
Skype co-founder
Jaan Tallinn and
Google DeepMind co-founder
Demis Hassabis, among others. According to
PAX For Peace (one of the founding organisations of the Campaign to Stop Killer Robots), fully automated weapons (FAWs) will lower the threshold for going to war: as soldiers are removed from the battlefield and the public is distanced from experiencing war, politicians and other decision-makers gain more latitude in deciding when and how to go to war. They warn that once deployed, FAWs will make democratic control of war more difficult, something that the author of
Kill Decision (a novel on the topic) and IT specialist
Daniel Suarez also warned about: according to him it might recentralize power into very few hands by requiring very few people to go to war. The
Holy See has called for the international community to ban the use of LAWs on several occasions. In November 2018, Archbishop
Ivan Jurkovic, the permanent observer of the Holy See to the United Nations, stated that "In order to prevent an arms race and the increase of inequalities and instability, it is an imperative duty to act promptly: now is the time to prevent LAWs from becoming the reality of tomorrow's warfare." The Church worries that these weapons systems have the capability to irreversibly alter the nature of warfare, create detachment from human agency, and put in question the humanity of societies. At a UN meeting convened to discuss the matter, the majority of governments represented favoured a ban on LAWs. A minority of governments, including those of Australia, Israel, Russia, the UK, and the US, opposed a ban. In December 2022, a vote of the
San Francisco Board of Supervisors to authorize
San Francisco Police Department use of LAWs drew national attention and protests. The Board reversed this vote in a subsequent meeting.
Regulation without banning

A third approach focuses on regulating the use of autonomous weapon systems in lieu of a ban.
Military AI arms control will likely require the institutionalization of new international norms embodied in effective technical specifications combined with active monitoring and informal ('Track II') diplomacy by communities of experts, together with a legal and political verification process. In 2021, the United States
Department of Defense requested a dialogue with the
Chinese People's Liberation Army on AI-enabled autonomous weapons but was refused. Under the framework of the
Convention on Certain Conventional Weapons, states have discussed lethal autonomous weapon systems since 2014. In 2016, the treaty's states parties established an open-ended
Group of Governmental Experts on Lethal Autonomous Weapons Systems to continue those discussions. The discussions have addressed international humanitarian law, accountability, possible prohibitions and regulations, and the extent of human control required over AI-enabled weapons. A
summit of 60 countries was held in 2023 on the responsible use of AI in the military. On 22 December 2023, a
United Nations General Assembly resolution was adopted to support international discussion regarding concerns about LAWs. The vote was 152 in favor, 4 against, and 11 abstentions.

== See also ==