
‘Killer Robots’ and the Third Revolution in Warfare

Iron Dome. Image by Israel Defense Forces.


War will never again be the same. Autonomous weapons have nearly arrived, and so far nothing has stopped their advance. At the 2015 International Joint Conference on Artificial Intelligence, an open letter signed by over three thousand of the world's leading robotics and AI researchers, and endorsed by luminaries including Stephen Hawking, Nobel Laureate Frank Wilczek, and Elon Musk, declared that autonomous weapons, informally known as killer robots, represent the "third revolution in warfare, after gunpowder and nuclear arms." The letter further warned that "Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, not decades."

This moment is the culmination of decades of progress toward autonomy. Automated weapons (weapons programmed to perform specific functions repeatedly) are already proliferating, offering a glimpse of what fully autonomous weaponry may become. South Korea has deployed Samsung Techwin surveillance and sentry robots along the Korean Demilitarized Zone. The U.S. MK 15 Phalanx weapons system can detect, track, and engage threats such as anti-ship missiles and aircraft. Israel's Iron Dome defense system automatically destroys incoming missiles after an operator's approval. Israel's Harpy weapon system, a loitering munition, can attack and destroy radar emitters.

Autonomous weapons, broadly defined as weapons able to select and engage targets without human intervention, are now being developed by the world's foremost military powers, including the United States, the United Kingdom, Israel, China, and Russia. In February 2016, Secretary of Defense Ash Carter explained that the United States is developing "self-driving boats which can network together to do all kinds of missions, from fleet defense to close-in surveillance, without putting sailors at risk." Rear Admiral Robert Girrier said that "the technology is here" for autonomous and intelligent unmanned systems, which offer the United States the opportunity "to achieve supremacy at a lower cost." U.S. Air Force General Paul Selva openly stated that "artificial intelligence can help us with a lot of things that make warfighting faster, that make warfighting more predictable, that allow us to mine all of the data we have about an opponent to make better operational decisions."

Autonomous weapons could create battlefield conditions in which far fewer soldiers need to be deployed in life-threatening situations. They can be designed to act and react faster, and perhaps more precisely, than humans. From a practical standpoint, autonomous weapons would not need to be fed, would not disobey orders or suffer trauma, and could be mass-produced.

The development of autonomous weapons is not without opposition, however. Critics argue that autonomous weapons will lack the ability to distinguish between combatants, surrendering combatants, and noncombatants, as Protocol I of the Geneva Conventions requires. They further contend that such weapons will lower the threshold for launching military action (a threshold already reduced by the proliferation of drones), make warfare disproportionately asymmetrical, ignite an artificial intelligence arms race, create problems of accountability, and prove incapable of the compassion of which soldiers, as human beings, are capable. Meanwhile, the United Nations (UN) Report of the Special Rapporteur questioned whether sufficient legal frameworks are in place to account for robots having "the power of life and death over human beings."

Currently, Human Rights Watch, the Future of Life Institute, the International Committee for Robot Arms Control (ICRAC), the Women's International League for Peace and Freedom, and the Campaign to Stop Killer Robots have all adopted official positions calling for various preemptive bans on the development and use of autonomous weapons. Meanwhile, the UN Report of the Special Rapporteur advocated that the UN Human Rights Council call on all states to "declare and implement national moratoria on at least the testing, production, assembly, transfer, acquisition, deployment and use of Lethal Autonomous Robots (LARs) until such time as an internationally agreed upon framework on the future of LARs has been established."

These measures have so far had limited effect. The military powers pursuing autonomous weapons have given little to no indication that they plan to halt development, and states such as Russia and China have refrained from even discussing their programs at any length.

However, in a Defense Department directive titled "Autonomy in Weapon Systems," the United States did adopt a number of self-imposed constraints. The directive stipulates that fully autonomous weapons may not use lethal or kinetic force, and that human-supervised autonomous weapons may not select humans as targets. While human-operated semi-autonomous weapons are permitted to apply lethal force, they must be designed so that if communications degrade or are lost, the system cannot then begin selecting targets autonomously. More broadly, the directive asserts that all American autonomous weapons will comply with the laws of war. To some, the United States is taking a positive, pragmatic, and humanitarian step forward. To others, it is not doing enough.

In the coming decade, much more will be revealed about the impact of autonomous weapons upon war and civilians. Activists who wish to see these weapons regulated will have to exert great effort to enact meaningful arms control in an increasingly complex international system.

Past efforts to regulate new weaponry have an uneven record of success. States have historically, albeit inconsistently, demonstrated respect for civilian protection and the fair treatment of combatants under international humanitarian law. But they have also been known to prioritize security, military, and strategic interests. In light of this, effective arms control for autonomous weapons can only be established if it is made possible for states to simultaneously fulfill their humanitarian obligations and their pragmatic objectives. This will be a very difficult balancing act to achieve.


About Dylan Evans

Dylan is a Contributing Editor of Global Politics, and a student of International Relations at the University of St Andrews. His studies focus on Afghanistan, arms control, American foreign policy, and the liberal world order.
