Here is where, at least as Allen sees it, the difficulty lies. For all of its technological dominance, the U.S. is actually something of a laggard in marshaling government resources to execute an AI strategy. “The White House Office of Science and Technology Policy in the last quarter of the Obama administration,” he points out, “released three very high-quality reports looking at artificial intelligence technology and what its implications were for the economy, for the workforce, and for the future of research and development technology. Those reports were full of great recommendations. And then in January 2017, the Defense Innovation Board released another report, making recommendations for improving the technology strategy of the Department of Defense.”

But many of the most important recommendations in those reports, Allen notes, have gone unimplemented. In one “particularly frustrating” case — the creation of an AI institute within the Department of Defense — Allen points out that “the recommendation was actually picked up by China and implemented by China before it was implemented in the U.S.”

That is the problem writ small. But other structural forces widening this strategic gap between the U.S. and China are worth noting as well. “On the surveillance side,” Allen says, “China has been aggressively increasing the use of AI in its domestic surveillance apparatus. If you are a Chinese political dissident, and you go to a music concert where it's possible that a lot of people would get riled up and you might be able to make something political happen, facial recognition technology will be used to analyze the footage from the cameras and arrest you preemptively. So you cannot, as someone identified in the government's database of political dissidents, attend large gatherings. That's a technology that exists and is implemented already in China.”

This, of course, hearkens back to a divisive debate about security and liberty, effectiveness and transparency, that has energized a lot of the thinking around U.S. policy since 9/11. “In the case of artificial intelligence and digital technologies more generally,” says Allen, “there is a question about whether upholding our ethical values about privacy and civil liberties must be traded off against performance. With AI, in many areas there is that trade-off. In the case of medical records, you can use AI to analyze them and reach some really interesting predictive conclusions that can help you inform decision-making about care provisioning, about the causes of disease, about the design of pharmaceuticals. But all of those things to some extent raise the question of violations of privacy. Are countries that are more willing to cross that line going to have an advantage in the economic and national security utilization of artificial intelligence technology? Wherever possible, we want to identify those cases where improving safety and ethical utilization increases performance. But clearly there are other areas where there are trade-offs. And we are never going to stop having to decide where our country falls in choosing between those trade-offs. That’s a debate that's going to be with us for the next hundred years.”

That said, Allen does see the need for the U.S. to refocus its strategic priorities on AI — particularly when it comes to the more pragmatic side of the question. “In the pure research and development sphere,” he says, “in terms of generating the next breakthrough in AI, the West has a lead. And that would be not just the United States, but also other states such as the United Kingdom and Canada that have really strong AI talent pools.”

This is not the case in applied research and development, says Allen. “China has been far, far more aggressive in this field than the West has,” he notes. “Venture capital funding for AI startups has been higher in China than in the United States for two years now. And what's interesting is that a lot of these startups are profitable. AI startups in the United States are often speculative investments. The companies are not currently profitable, but you hope that they will be in the future. Whereas in China, many AI startups are already making money by the time they get investments. And a lot of the money that they're making is coming from the domestic surveillance or national security apparatus. They're actually delivering value to the government, and a lot of the value they're delivering is connected to national security.”

Cultural and legal differences on the question of privacy are at play here, Allen argues. “If you want to get tens of millions of medical records in the United States upon which to use AI algorithms,” he says, “that's a regulatory and privacy nightmare. Whereas in China, getting access to hundreds of millions of records is just a matter of being connected to the right government official who can open the right door for you.”

That does not, however, change the brute facts and the sobering conclusions we must draw from them. “China,” as Allen says, “has more strategic focus and funding on artificial intelligence than what I'm seeing coming out of the United States. Their national AI strategy calls for matching the West in AI capability by 2020, for leading the world by 2025, and for literally dominating the global AI industry by 2030. Smart people, including former Google CEO Eric Schmidt, have assessed that strategy and those goals as credible. I do not believe that U.S. leadership in this field is guaranteed in any sense. I would say, absent a major change in policy by the U.S. government, China's leadership is more likely than not.”

These macro issues are worrying enough in and of themselves. But there is one further point that Allen emphasized — about an application of AI that does not present a direct military threat but rather a political one. “I spend a lot of time worrying about the ability of AI technology to be used in forgery, propaganda, and strategic deception,” he told us. “It is relatively easy now if you have a recording of someone's voice, even if it’s not a terribly long recording, to run that through an AI system which can thereafter generate audio of that voice speaking any words you can type. So you can imagine generating audio of a U.S. politician saying incredibly offensive things or confessing to a crime. And this is going to be an incredible challenge for global democratic discourse. For the past hundred-plus years, we have been in a situation where recording and authentication technology has had a durable technological advantage over forgery technology.”