The rise of artificial intelligence has occasioned a long and difficult philosophical debate: will this technology save humanity or destroy it? It recalls 20th-century fears about nuclear energy and digital computing, or the fears of earlier eras about artificial flight, electricity, and crossbows. But for all our public philosophizing about AI, there has been far less discussion of its immediate, practical implications.
One of these, of course, is its military applicability. There is already substantial evidence that AI could form the axis of the next great arms race. We spoke with Gregory Allen of the Center for a New American Security about what military AI is going to look like.
To understand the state of the art here, Allen told us, it is important to dispel a major myth. “There is a big and widely held misconception,” he said, “that we are waiting for some kind of breakthrough that will enable human-level artificial intelligence — i.e., we think that is when the military implications of AI occur.”
This could not be further from the truth, as it turns out. “The implications of artificial intelligence for national security are already with us,” Allen said. “There is no research breakthrough required for AI technology to be incredibly useful in the domain of military affairs and espionage affairs. Essentially, we already have off-the-shelf, in the commercial world and the academic sphere, AI capabilities powerful enough to develop really advanced weapons technologies and really advanced espionage technologies. We are not waiting for basic research and development progress. We are only waiting for applications development — the leveraging of existing technology and its adaptation to military and intelligence spheres.”
This should not be surprising, given that the use of AI does not represent a radical break with the previous technological exploits of the world's leading militaries. “Automation,” says Allen, “has been a part of the military story for a long, long time. The first autopilot, invented in the 1920s, was explicitly developed with a military aircraft application in mind. What has changed more recently is the availability of large data sets, large amounts of computational power, and advanced machine-learning algorithms upon which to develop autonomous systems. That is true for the weaponry side and for the data analysis and cybersecurity sides of warfare and espionage.”
So what does this mean for military praxis over the medium term? Allen sees the first frontier of major impact as military robotics: the use of AI to direct large swarms of expendable unmanned vehicles in spectacular attacks. “We should expect,” he says, “to see a greatly expanded capability of autonomous systems. Consider the Tomahawk cruise missile. This is a system that costs $1.5 million per shot; so in the attack on Syria in April of 2017, the United States launched 60 Tomahawk cruise missiles for a total cost of nearly $100 million just for the munitions. And that was because the missiles are incredibly expensive to develop in terms of the aerospace technologies and the rocketry; the onboard flight computers and avionics that allow the Tomahawk to deliver an explosive to a precise target within one to three meters from hundreds of miles away are also costly. What we're seeing with commercial artificial intelligence is that these types of capabilities, which used to be restricted to advanced militaries, are suddenly available to a much broader range of actors. Commercial drone technology is not nearly as good as a Tomahawk cruise missile. It can't go hundreds of miles and it can't go multiples of the speed of sound. But it can go tens of miles, and it can deliver an explosive payload. And increasingly it can do so autonomously rather than requiring a remote pilot. And that's why ISIS, which is an insurgent group and not normally the sort of actor that we would think of as having air force-type capabilities, is suddenly making extensive use of commercial drones that they weaponize by adding explosives. They're using these drones for intelligence, surveillance, and reconnaissance capabilities. And they're using them for a crude version of precision strike capabilities.”
This rapid reduction in cost and production hurdles is not confined to insurgent or non-state actors; it is going to change the way incumbent states conduct their military affairs as well. “For the U.S.’s peer competitors, such as Russia or China,” Allen says, “AI presents itself as a disruptive innovation. Which is to say it offers a cheap and crude alternative to developing the high level of capabilities that the United States might want. So rather than developing an advanced nuclear-powered aircraft carrier, perhaps China can instead invest in low-cost technologies that might help make that aircraft carrier obsolete. Imagine millions of drones with explosive payloads swarming towards an aircraft carrier battle group. Given that the size of the swarm you could deliver increases as drone technologies get cheaper and more capable every year, and as advances in AI increase their autonomy every year, the types of attacks you could mount would be really, really interesting. And cheap.”
Allen cites Gill Pratt, a former DARPA program manager and a legend in the robotics and intelligent systems field, for a useful metaphor to understand the kind of transformation we will witness. “Pratt has said that he believes that advances in computer vision and artificial intelligence are likely to lead to a Cambrian explosion in robotics systems. He's specifically making an analogy to an era in the history of life on Earth in which the evolution of sight and intelligence led to an explosion in the diversity of life. I think we should expect to see the same in robotics, and I think we should also expect many of those advances to come out of the commercial sector. Which has important implications for the balance of military power. The early stages of this are already with us. The United States, Russia, and China are all sprinting in this direction. Russia has released a strategy that calls for 30 percent of its combat power to be robotic in nature by the year 2030. So militaries are really moving very aggressively in this direction, and I don't expect that to slow down.”