Sunday, November 16, 2014

Implications of Lethal Autonomous Weapons (LAWS), or “killer robots,” at the UN

A sentry robot freezes a hypothetical intruder by pointing
its machine gun during a test in Cheonan, South Korea,
on September 28, 2006.

UN Proceedings this Month

UN: ‘Killer Robot’ Talks Go Forward, Slowly | Human Rights Watch
(Geneva) – Countries at an international conference on conventional weapons agreed on November 14, 2014, to further discuss concerns raised about fully autonomous weapons, or “killer robots,” Human Rights Watch said today. However, greater urgency is needed to address the threat these weapons pose.
The 118 nations that are part of the Convention on Conventional Weapons (CCW) agreed by consensus to reconvene at the United Nations in Geneva on April 13-17, 2015, to continue deliberations that started earlier in 2014 on questions relating to “lethal autonomous weapons systems.” These weapons have not yet been developed, but technology is moving rapidly toward increasing autonomy.
Remarks by Stephen Townley at the 69th United Nations General Assembly, Sixth Committee (Legal) Session on Agenda Item 79: Status of the Protocols Additional to the Geneva Conventions of 1949

CNAS_AutonomousWeaponsUN_HorowitzScharreSayler.pdf

Previous Discussions Elsewhere

Here come the autonomous robot security guards: What could possibly go wrong? | ExtremeTech

Elon Musk thinks robots could turn on us in next five years - UPI.com
After saying artificial intelligence in robots is "our biggest existential threat" and that we're risking "summoning a demon" during an interview with MIT, Musk has now accidentally made public his ideas of when the threat might materialize.
"The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most," Musk wrote in an email to publisher John Brockman. "Please note that I am normally super pro technology and have never raised this issue until recent months. This is not a case of crying wolf about something I don't understand."
Future Tech? Autonomous Killer Robots Are Already Here - NBC News.com
This week's talks in Geneva are the beginning of a debate over whether autonomous lethal systems will eventually be banned under the Convention on Certain Conventional Weapons (CCW).
Some of the robots are just prototypes. Others, like sentry robots installed by South Korea and Israel, have the ability to kill autonomously but don't. As the technology gets more advanced, some officials are worried that stories about "Terminator"-style robots might slow the adoption of machines that could save the lives of their soldiers.
It's not just governments that are interested in the technology that's developed in the pursuit of warrior robots. In December, Google bought Boston Dynamics, a company famous for building an array of frighteningly realistic robots for the research arm of the Department of Defense.

The Moral Implications Of Robots That Kill - Business Insider
As you can imagine, the killer robot issue is one that raises a number of concerns in the arenas of wartime strategy, morality, and philosophy. The hubbub is probably best summarized with this soundbite from The Washington Post: "Who is responsible when a fully autonomous robot kills an innocent? How can we allow a world where decisions over life and death are entirely mechanized?"
The Moral Hazards and Legal Conundrums of Our Robot-Filled Future | WIRED
The robots are coming, and they’re getting smarter. They’re evolving from single-task devices like Roomba and its floor-mopping, pool-cleaning cousins into machines that can make their own decisions and autonomously navigate public spaces. Thanks to artificial intelligence, machines are getting better at understanding our speech and detecting and reflecting our emotions. In many ways, they’re becoming more like us.
Whether you find it exhilarating or terrifying (or both), progress in robotics and related fields like AI is raising new ethical quandaries and challenging legal codes that were created for a world in which a sharp line separates man from machine. Last week, roboticists, legal scholars, and other experts met at the University of California, Berkeley law school to talk through some of the social, moral, and legal hazards that are likely to arise as that line starts to blur.
Should the world kill killer robots before it’s too late? - The Washington Post

On Tuesday in Geneva, the United Nations will convene a meeting on the use of "killer robots" -- lethal autonomous weapons that in theory could select targets and attack them without direct human mediation. To be clear, killer robots don't yet exist, but a host of countries are developing technology that could make them a reality in the not so distant future. Quite a few organizations and activists want to prevent that from ever happening.
What are these machines? An article in Foreign Affairs outlines the sort of technology that is moving us toward a future populated by killer robots:
The Samsung Techwin security surveillance guard robots, which South Korea uses in the demilitarized zone it shares with North Korea, can detect targets through infrared sensors. Although they are currently operated by humans, the robots have an automatic feature that can detect body heat in the demilitarized zone and fire with an onboard machine gun without the need for human operators. The U.S. firm Northrop Grumman has developed an autonomous drone, the X-47B, which can travel on a preprogrammed flight path while being monitored by a pilot on a ship. It is expected to enter active naval service by 2019. Israel, meanwhile, is developing an armed drone known as the Harop that could select targets on its own with a special sensor, after loitering in the skies for hours.
In Defense Of Killer Robots
The United Nations will soon gather to ponder the most critical question of our time: What do we do about the proliferation of lethal automatons — otherwise known as “killer robots.” These androids robots, yet to be invented, will have the ability to target people without “direct human mediation,” a technological advancement that is attracting the predictable pushback. As the Washington Post reports, groups like Human Rights Watch — which recently released a chilling report called “Shaking the Foundation: The Human Implications of Killer Robots” — and dignitaries, from Desmond Tutu to Lech Walesa, have signed a letter asserting that it is “unconscionable that human beings are expanding research and development of lethal machines  that would be able to kill people without human intervention.”
Why is that? Because human beings have a long history of making rational and ethical decisions when it comes to killing people?

Results of the first session at the UN in Geneva this past spring

United Nations News Centre - UN meeting targets 'killer robots'
14 May 2014 – The top United Nations official in Geneva has urged bold action by diplomats at the start of the world body's first ever meeting on Lethal Autonomous Weapons (LAWS), better known as “killer robots,” telling them: “You have the opportunity to take pre-emptive action and ensure that the ultimate decision to end life remains firmly under human control.”
The remarks were made yesterday by Michael Møller, Acting Director-General of the United Nations Office at Geneva, at the opening session of the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems taking place this week at the Palais des Nations in Geneva.
Ambassador Jean-Hugues Simon-Michel of France, who is chairing the four-day expert meeting, noted: “Lethal autonomous weapons systems are a challenging emerging issue on the disarmament agenda right now.”
The four days of discussions will focus on technological developments, the ethical and sociological questions that arise from the development and deployment of autonomous weapons, as well as the adequacy and legal challenges to international law and the possible impact on military operations, according to the UN Office for Disarmament Affairs (ODA).
UN Disarmament | Lethal Autonomous Weapons
The CCW Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS) took place from 13 to 16 May 2014 at the United Nations in Geneva.

At the 2013 CCW Meeting of High Contracting Parties, a new mandate on lethal autonomous weapons systems (LAWS) was agreed on. The mandate states:
"...Chairperson will convene in 2014 a four-day informal Meeting of Experts, from 13 to 16 May 2014, to discuss the questions related to emerging technologies in the area of lethal autonomous weapons systems, in the context of the objectives and purposes of the Convention. He will, under his own responsibility, submit a report to the 2014 Meeting of the High Contracting Parties to the Convention, objectively reflecting the discussions held." The Meeting of Experts was chaired by Ambassador Jean-Hugues Simon-Michel of France. 
USA_Opening Statement MX_LAWS_2014.pdf


… At the outset, we would like to make three framing points and then highlight one issue that is, to us, critical in thinking about autonomous features of weapons systems.
  1. To move toward a common understanding does not mean that we need to define “lethal autonomous weapons systems” at the outset. … We are here to discuss future weapons or, in the words of the mandate for this meeting, “emerging technologies.” Therefore we need to be clear that in these discussions we are not referring to remotely piloted aircraft, which, as their name indicates, are not autonomous and are therefore conceptually distinct from LAWS.
  2. Second, … it is premature to determine where these discussions might or should lead. In our view, … we acknowledge the value in discussing lethal autonomous weapons systems in the CCW, a forum focused on international humanitarian law, which is the relevant framework for this discussion.
  3. Third, we must bear in mind the complex and multifaceted nature of this issue. … For instance, our discussion here will necessarily touch on the development of civilian technology, which we expect to continue unrestricted by those discussions.

… One of the key issues we think states should focus on in considering autonomy in weapons systems is risk. For example, how does the battlefield, whether cluttered or uncluttered, affect the risk of using a particular weapons system?

In order to assess risk associated with the use of any weapons system, …, after a comprehensive policy review, the United States Department of Defense issued DoD Directive 3000.09, “Autonomy in Weapon Systems,” in 2012. … At this early stage, we cannot say, and, to reiterate, should not prejudge, where the discussion will lead, but we do recognize that it is a good time for this discussion to begin.

Stephen Townley
DoD Directive 3000.09, November 21, 2012 - 300009p.pdf
There's a lot of talk at the UN about how early this is, but I think it is too late to be early. The applicability and definition of autonomous lethal systems in this directive are limited. Fire and forget weapons are still in an ambiguous situation. Anyone who has ever been chased by a Mark 48 Torpedo, a homing AAM or SAM, might consider them "LAWS". The DoD Directive also excludes mines, which would seem to exclude a CAPTOR ASW mine, or smart antipersonnel delayed action munitions.
