Greg Allen on the AI arms race

The rise of artificial intelligence has occasioned a long and difficult philosophical debate: will this technology save humanity or destroy it? It recalls 20th-century fears about nuclear energy and digital computing, and the fears of earlier eras about artificial flight, electricity, and crossbows. But for all our public philosophizing about AI, there has been far less discussion of its immediate, practical implications.

One of these, of course, is its military applicability. There is already a substantial body of evidence that AI could form the axis of the next great arms race. We spoke with Gregory Allen, of the Center for a New American Security, about what military AI is going to look like.

In understanding the state of the art here, Allen told us, it is important to dispense with a major myth. “There is a big and widely held misconception,” he said, “that we are waiting for some kind of breakthrough that will enable human-level artificial intelligence — i.e., we think that is when the military implications of AI occur.”

As it turns out, this could not be further from the truth. “The implications of artificial intelligence for national security are already with us,” Allen said. “There is no research breakthrough required for AI technology to be incredibly useful in the domain of military affairs and espionage affairs. Essentially, we already have, off the shelf in the commercial world and the academic sphere, AI capabilities powerful enough to develop really advanced weapons technologies and really advanced espionage technologies. We are not waiting for basic research and development progress. We are only waiting for applications development — the leveraging of existing technology and the adaptation of it to military and intelligence spheres.”

This should not be surprising, given that the way AI is used does not represent a radical break with the previous technological exploits of the world’s top-tier militaries. “Automation,” says Allen, “has been a part of the military story for a long, long time. The first autopilot, which was invented in the 1920s, was explicitly developed with a military aircraft application in mind. What has changed more recently is the availability of large data sets, large amounts of computational power, and advanced machine-learning algorithms upon which to develop autonomous systems. That is true for the weaponry side and for the data analysis and cybersecurity sides of warfare and espionage.”

So what does this mean for military praxis over the medium term? Allen sees the first frontier of major impact as military robotics: the use of AI to direct large swarms of expendable unmanned vehicles in spectacular attacks. “We should expect,” he says, “to see a greatly expanded capability of autonomous systems. Consider the Tomahawk cruise missile. This is a system that costs $1.5 million per shot; so in the attack on Syria in April of 2017, the United States launched 60 Tomahawk cruise missiles for a total cost of nearly $100 million just for the munitions. And that was because the missiles are incredibly expensive to develop in terms of the aerospace technologies and the rocketry; the onboard flight computers and avionics that allow the Tomahawk missile to deliver an explosive to within one to three meters of a precise target from hundreds of miles away are also costly. What we’re seeing with commercial artificial intelligence is that these types of capabilities, which used to be restricted to advanced militaries, are suddenly available to a much broader range of actors. Commercial drone technology is not nearly as good as a Tomahawk cruise missile. It can’t go hundreds of miles and it can’t go multiples of the speed of sound. But it can go tens of miles, and it can deliver an explosive payload. And increasingly it can do so autonomously rather than requiring a remote pilot. And that’s why ISIS, which is an insurgent group and not normally the sort of actor that we would think of as having air force-type capabilities, is suddenly making extensive use of commercial drones, which they weaponize by attaching explosives. They’re using these drones for intelligence, surveillance, and reconnaissance. And they’re using them for a crude version of precision strike capabilities.”

This rapid reduction in cost and production hurdles is not confined to insurgent or non-state actors; it is going to change the way incumbents and states conduct their military affairs as well. “For the U.S.’s peer competitors, such as Russia or China,” Allen says, “AI presents itself as a disruptive innovation. Which is to say, it offers a cheap and crude alternative to developing the high level of capabilities that the United States might want. So rather than developing an advanced nuclear-powered aircraft carrier, perhaps China can instead invest in low-cost technologies that might help make that aircraft carrier obsolete. Imagine millions of drones with explosive payloads swarming towards an aircraft carrier battle group. Given that the size of the swarm you could deliver increases as drone technologies get cheaper and more capable every year, and as advances in AI increase their autonomy every year, the types of attacks you could mount would be really, really interesting. And cheap.”
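
The swarm scenario rests on a simple technical point: coordinating many cheap vehicles does not require central control. Below is a minimal, illustrative sketch of the classic decentralized “boids”-style steering rules in Python. Every parameter is an arbitrary illustration value, and nothing here models any real weapons system; the point is only that each vehicle needs nothing more than its neighbors’ positions and a shared target.

```python
import numpy as np

N = 50                                   # number of drones in the swarm
rng = np.random.default_rng(0)
pos = rng.uniform(-100, 100, (N, 2))     # starting positions (meters)
vel = np.zeros((N, 2))                   # starting velocities (m/s)
target = np.array([1000.0, 0.0])         # shared objective point

def step(pos, vel, dt=0.1):
    """One tick of decentralized steering: each drone uses only its
    neighbors' positions and the shared target -- no central controller."""
    new_vel = vel.copy()
    for i in range(N):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        near = (dists > 0) & (dists < 30.0)            # cohesion neighborhood
        cohesion = offsets[near].mean(axis=0) if near.any() else 0.0
        crowded = (dists > 0) & (dists < 10.0)         # collision-avoidance zone
        separation = -offsets[crowded].sum(axis=0) if crowded.any() else 0.0
        to_target = target - pos[i]
        to_target = to_target / (np.linalg.norm(to_target) + 1e-9)
        new_vel[i] += dt * (0.05 * cohesion + 0.2 * separation + 5.0 * to_target)
        speed = np.linalg.norm(new_vel[i])
        if speed > 30.0:                               # crude top-speed cap (m/s)
            new_vel[i] *= 30.0 / speed
    return pos + dt * new_vel, new_vel

for _ in range(1000):
    pos, vel = step(pos, vel)            # the swarm converges on the target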

Allen cites Gill Pratt, a former DARPA program manager and a legend in the robotics and intelligent systems field, for a useful metaphor for the kind of transformation we will witness. “Pratt has said that he believes that advances in computer vision and artificial intelligence are likely to lead to a Cambrian explosion in robotics systems. He’s specifically making an analogy to an era in the history of life on Earth in which the evolution of sight and intelligence led to an explosion in the diversity of life. I think we should expect to see the same in robotics, and I think we should also expect those advances to come out of the commercial sector. Which has important implications for the balance of military power. The early stages of this are already with us. The United States and Russia and China are all sprinting in this direction. Russia has released a strategy calling for 30 percent of its combat power to be robotic in nature by the year 2030. So militaries are really moving very aggressively in this direction, and I don’t expect that to slow down.”

Here is where, at least as Allen sees it, the difficulty lies. For all of its technological dominance, the U.S. has actually been something of a laggard in marshaling government resources to execute an AI strategy. “The White House Office of Science and Technology Policy in the last quarter of the Obama administration,” he points out, “released three very high-quality reports looking at artificial intelligence technology and what its implications were for the economy, for the workforce, and for the future of research and development technology. Those reports were full of great recommendations. And then in January 2017, the Defense Innovation Board released another report, making recommendations for improving the technology strategy of the Department of Defense.”

But many of the most important recommendations in those reports, Allen notes, have gone unimplemented. In one “particularly frustrating” case — the creation of an AI institute within the Department of Defense — Allen points out that “the recommendation was actually picked up by China and implemented by China before it was implemented in the U.S.”

That is the problem writ small. But other structural forces widening this strategic gap between the U.S. and China are worth noting as well. “On the surveillance side,” Allen says, “China has been aggressively increasing the use of AI in its domestic surveillance apparatus. If you are a Chinese political dissident, and you go to a music concert where it’s possible that a lot of people would get riled up and you might be able to make something political happen, facial recognition technology will be used to analyze the camera footage and arrest you preemptively. So you cannot, as someone identified in the government’s database of political dissidents, attend large gatherings. That’s a technology that exists and is implemented already in China.”
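
The underlying mechanism is, at its core, embed-and-match: encode each face in the footage as a numeric vector and compare it against vectors computed from watchlist photos. Here is a minimal sketch using the open-source face_recognition Python library; the file names and the single-person watchlist are hypothetical placeholders, and 0.6 is simply the library’s default match tolerance.

```python
import face_recognition

# Reference photo of a watchlisted individual (hypothetical file name).
known = face_recognition.face_encodings(
    face_recognition.load_image_file("watchlisted_person.jpg"))[0]

# One frame pulled from the venue's camera footage (hypothetical file name).
frame = face_recognition.load_image_file("concert_frame.jpg")

# Encode every face found in the frame and compare against the watchlist.
for encoding in face_recognition.face_encodings(frame):
    # compare_faces returns [True] when the embedding distance falls
    # under the tolerance threshold.
    if face_recognition.compare_faces([known], encoding, tolerance=0.6)[0]:
        print("watchlist match found in frame")
```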

This, of course, hearkens back to a divisive debate about security and liberty, effectiveness and transparency, that has energized much of the thinking around U.S. policy since 9/11. “In the case of artificial intelligence and digital technologies more generally,” says Allen, “there is a question about whether upholding our ethical values about privacy and civil liberties must be traded off against performance. With AI, in many areas there is that trade-off. In the case of medical records, you can use AI to analyze them and reach some really interesting predictive conclusions that can inform decision-making about care provisioning, about the causes of disease, about the design of pharmaceuticals. But all of those things to some extent raise the question of violations of privacy. Are countries that are more willing to cross that line going to have an advantage in the economic and national security utilization of artificial intelligence technology? Wherever possible, we want to identify those cases where improving safety and ethical utilization increases performance. But clearly there are other areas where there are trade-offs. And we are never going to stop having to decide where our country falls in choosing between those trade-offs. That’s a debate that’s going to be with us for the next hundred years.”
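
To make the trade-off concrete: the “really interesting predictive conclusions” Allen mentions typically come from ordinary supervised learning over patient features. A toy sketch with scikit-learn follows; the records and the feature-outcome relationship below are entirely synthetic, fabricated purely to show the mechanics, which is exactly why real work needs the privacy-sensitive data Allen is talking about.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Fabricated stand-ins for medical-record features for 5,000 synthetic patients.
rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.normal(50, 15, n),    # age (years)
    rng.normal(120, 20, n),   # systolic blood pressure (mmHg)
    rng.normal(200, 40, n),   # total cholesterol (mg/dL)
])

# Invented outcome: disease risk loosely tied to the features via a sigmoid.
risk = 0.03 * (X[:, 0] - 50) + 0.02 * (X[:, 1] - 120) + 0.01 * (X[:, 2] - 200)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-risk))).astype(int)

# Fit a predictive model and evaluate it on held-out synthetic patients.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```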

That said, Allen does see the need for the U.S. to refocus its strategic priorities on AI, particularly when it comes to the more pragmatic side of the question. “In the pure research and development sphere,” he says, “in terms of generating the next breakthrough in AI, the West has a lead. And that would be not just the United States, but also other states such as the United Kingdom and Canada that have really strong AI talent pools.”

This is not the case in applied research and development, says Allen. “China has been far, far more aggressive in this field than the West has,” he notes. “Venture capital funding for AI startups has been higher in China than in the United States for two years now. And what’s interesting is that a lot of these startups are profitable. AI startups in the United States are often speculative investments. The companies are not currently profitable, but you hope that they will be in the future. Whereas in China, many AI startups are already making money by the time they get investments. And a lot of the money they’re making is coming from the domestic surveillance or national security apparatus. They’re actually delivering value to the government, and a lot of the value they’re delivering is connected to national security.”

Cultural and legal differences on the question of privacy are at play here, Allen argues. “If you want to get tens of millions of medical records in the United States upon which to use AI algorithms,” he says, “that’s a regulatory and privacy nightmare. Whereas in China, getting access to hundreds of millions of records is just a matter of being connected to the right government official who can open the right door for you.”

That does not, however, change the brute facts and the sobering conclusions we must draw from them. “China,” as Allen says, “has more strategic focus and funding on artificial intelligence than what I’m seeing coming out of the United States. Their national AI strategy calls for matching the West in AI capability by 2020, for leading the world by 2025, and for literally dominating the global AI industry by 2030. Smart people, including former Google CEO Eric Schmidt, have assessed that strategy and those goals as credible. I do not believe that U.S. leadership in this field is guaranteed in any sense. I would say, absent a major change in policy by the U.S. government, China’s leadership is more likely than not.”

These macro issues are worrying enough in and of themselves. But there is one further point that Allen emphasized — about an application of AI that does not present a direct military threat but rather a political one. “I spend a lot of time worrying about the ability of AI technology to be used in forgery, propaganda, and strategic deception,” he told us. “It is relatively easy now if you have a recording of someone’s voice, even if it’s not a terribly long recording, to run that through an AI system which can thereafter generate their voice speaking any audio that you can type. So you can imagine generating audio of a U.S. politician saying incredibly offensive things or confessing to a crime. And this is going to be an incredible challenge for global democratic discourse. For the past hundred-plus years, we have been in a situation where recording and authentication technology has had a durable technological advantage over forgery technology.”
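
To make the forgery pipeline concrete: modern voice cloning is speaker-conditioned text-to-speech. The sketch below is written against a hypothetical VoiceCloner interface invented for this article — it is not a real library and performs no real synthesis; the stub writes silence so the script runs end to end — but real systems follow the same two steps: embed the target speaker from a short sample, then condition synthesis on that embedding for arbitrary text.

```python
import wave

class VoiceCloner:
    """Hypothetical stand-in for a speaker-conditioned TTS model."""

    def embed_speaker(self, sample_wav: str) -> bytes:
        # A real model would distill a short voice sample into a
        # speaker embedding; this stub returns a placeholder.
        return b"speaker-embedding-placeholder"

    def synthesize(self, text: str, speaker: bytes, out_path: str) -> None:
        # A real model would condition its decoder on the embedding to
        # speak the text; this stub writes one second of 16 kHz silence.
        with wave.open(out_path, "wb") as out:
            out.setnchannels(1)
            out.setsampwidth(2)              # 16-bit samples
            out.setframerate(16000)
            out.writeframes(b"\x00\x00" * 16000)

cloner = VoiceCloner()
# Step 1: embed the target speaker, even from a short recording.
embedding = cloner.embed_speaker("short_recording.wav")
# Step 2: generate their voice speaking "any audio that you can type."
cloner.synthesize("any sentence you can type", embedding, "forged.wav")
```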

One thing is certain. The U.S. needs to take a hard look at its AI posture. The challenges the technology presents are manifold. Given the current state of our democratic discourse and the murk surrounding our long-term geostrategy, this is an issue that demands action in the immediate present, not in the nebulous future our techno-prophets like to expatiate upon.