
Not Your Grandfather’s Battlefield: Artificial Intelligence and the Military

The US military is on the cusp of a fundamental transformation in how it operates on the battlefield, due in large part to advances in artificial intelligence (AI). Although the military’s use of AI has drawn criticism over the past decade, based mostly on concerns about robots making life-and-death decisions, that use has taken on an air of inevitability. In fact, the pace of change is only accelerating, restrained more by political considerations than by technical problems. Here are three familiar words you’ll hear more and more often in the coming years.

Drones. Most people are familiar with the Predator, a deadly unmanned aircraft feared by terrorists worldwide. The Predator, however, is very slow, powered by an engine similar to one used on lawn mowers, and is monitored continuously from ground control stations, the so-called ‘man in the loop’. The next generation will be stealthy and jet-powered. What will set these aircraft apart most profoundly, however, will be their ability to think.

“What we’re seeing is a true revolution in military affairs,” Peter Singer, strategist and senior fellow at New America, wrote in his best-selling book, Wired for War:

“The introduction of unmanned systems to the battlefield doesn’t change simply how we fight, but for the first time changes who fights at the most fundamental level. It transforms the very agent of war, rather than just its capabilities.”

Today, both the US and the UK have developed drones that can be given a target to attack and then decide for themselves the best course to reach that target, the best angle of attack, and the most efficient weapon to use. Furthermore, if obstacles are encountered, the drone can recognize the danger and alter its plan, dealing with both the obstacle and the intended target. Currently there are no plans to take the man out of the loop; however, as the speed of warfare increases, it will become necessary to allow drones to make more and more decisions on their own…including the decision to kill.
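
The plan-and-replan behavior described above can be sketched as a simple loop: plan a route to the target, and whenever a new obstacle is sensed, remember it and plan again. This is purely illustrative; the grid world, the breadth-first-search planner, and the `sense_obstacle` callback are assumptions for the sketch, not details of any actual military system.

```python
from collections import deque

def bfs_path(start, goal, obstacles, size=5):
    """Shortest path on a size x size grid, avoiding known obstacle cells."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for cell in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = cell
            if (0 <= nx < size and 0 <= ny < size
                    and cell not in obstacles and cell not in seen):
                seen.add(cell)
                frontier.append((cell, path + [cell]))
    return None  # no route exists

def fly(start, goal, sense_obstacle):
    """Follow the planned path, re-planning whenever a new obstacle is sensed."""
    obstacles = set()
    pos = start
    while pos != goal:
        path = bfs_path(pos, goal, obstacles)
        if path is None:
            return None  # every route is blocked
        nxt = path[1]
        if sense_obstacle(nxt):  # danger detected: remember it and re-plan
            obstacles.add(nxt)
            continue
        pos = nxt
    return pos
```

For example, a drone flying from (0, 0) to (2, 0) that discovers a blocked cell at (1, 0) mid-flight simply routes around it and still reaches the target.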

Lasers. Of course, everyone is familiar with lasers. They’ve appeared in almost every sci-fi movie, book, comic, and TV show since the genre was created. In real life, however, they haven’t lived up to their billing — until now. Technical advances are allowing smaller, more powerful batteries to be positioned on warships and, eventually, on fighters. Buck Rogers, meet reality.

By 2020, the US will have fully operational lasers aboard a few warships, capable of destroying drones or missiles and disabling small ships. By the late 2020s, lasers will be ubiquitous throughout the US fleet, and tests are under way on the feasibility of mounting them on the F-35 fighter. Even here AI will play a role: many decisions are already left to computers once a battle begins; the difference will simply be replacing the missile with a laser.

The ramifications for warfare will be enormous, as lasers could gradually render most missiles obsolete. Even the newest Russian sea-skimming missile currently under development, which can reportedly fly at 4,600 mph, making it virtually unstoppable with today’s missile defense systems, would be vulnerable.

Swarming. An old word with a new agent. Rather than locusts, the military envisions mini-drones — 10, 20, even 100 — flying in coordinated fashion to overwhelm an adversary’s defenses. Just as the Predator shifted from strictly surveillance to a ground-attack role, so too will mini-drones, as the military figures out new ways to take advantage of the new asset.

Again, AI is making what was previously possible only in sci-fi a reality. One of the most difficult issues facing scientists has been coordinating a group of mini-drones so they are all on the ‘same page,’ moving in unison without colliding with one another; that hurdle has been overcome. But the drones will eventually do more than that. AI will allow the swarm to strategize, working as a team, attacking from different angles, and seeking out its own targets on the battlefield. All the commander will have to do is hit the ‘launch’ button.
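
The coordination problem above, keeping a swarm moving together without collisions, can be sketched with two of the classic flocking (“boids”) rules: cohesion (drift toward the group) and separation (push away from close neighbors). This is only an illustration; the article does not say what algorithm the military actually uses, and the weights and radius here are arbitrary assumptions.

```python
import math

def step(positions, velocities, sep_radius=1.0, dt=0.1):
    """Advance each drone one time step using cohesion and separation rules."""
    n = len(positions)
    # Cohesion target: the swarm's center of mass.
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    new_vel = []
    for i, (px, py) in enumerate(positions):
        vx, vy = velocities[i]
        # Cohesion: steer gently toward the center of mass.
        vx += 0.05 * (cx - px)
        vy += 0.05 * (cy - py)
        # Separation: steer away from any neighbor that is too close.
        for j, (qx, qy) in enumerate(positions):
            if i == j:
                continue
            d = math.hypot(px - qx, py - qy)
            if 0 < d < sep_radius:
                vx += (px - qx) / d
                vy += (py - qy) / d
        new_vel.append((vx, vy))
    new_pos = [(p[0] + v[0] * dt, p[1] + v[1] * dt)
               for p, v in zip(positions, new_vel)]
    return new_pos, new_vel
```

Run over many time steps, the separation term pushes apart any two drones that drift inside `sep_radius` of each other, while cohesion keeps the group from scattering — the “same page” behavior the paragraph describes, in miniature.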

AI is advancing so rapidly that some experts are sounding the alarm: the technology is progressing faster than international standards defining acceptable machine ‘behavior’ can be put in place. So far, allowing a drone to decide on its own to launch an attack has been a step too far for governments and militaries. However, as people become accustomed to AI and it becomes more ubiquitous in everyday life — think driverless cars — that will likely change.

Critics of AI in warfare have raised increasingly vehement alarms, and there are calls for the UN to begin establishing a legal framework for its operational use. The issues are complex, however. For instance, who is at fault when a machine inadvertently kills a human? The manufacturer? The machine’s programmer? The commander who placed the machine in the battle space? Issues such as these need to be addressed sooner rather than later since, as usually happens, technological progress tends to rapidly outpace legislators’ ability to create a framework for its use.

Tomorrow’s battlefield is here, and AI is going to become increasingly critical to success as new technology forces ever faster response times. As the pace of warfare accelerates, it will become increasingly necessary to allow machines to act independently or risk certain defeat. Humans are simply incapable of making decisions as quickly as machines, and the decisions machines make, especially under pressure, tend to be of higher quality than those made by humans.

One major obstacle currently holding AI back is an inability to understand nuance — for instance, what level of force is required in a given situation, or whether a human is an actual threat. As that and other obstacles are overcome, however, the temptation to give greater decision-making power to machines will be difficult to resist, especially when it can be justified as a means of keeping one’s own troops out of harm’s way. Questions abound, and, to date anyway, policy makers have been slow to respond. One thing is certain, though: it’s definitely not your grandfather’s battlefield anymore.

Image: Wikimedia Commons


About Rick LaVere

Rick LaVere is a graduate student at New York University studying international relations. He has been published previously in International Politics Reviews and Combat Aircraft Magazine.


