Myth #1: The US Needs to Pursue AI-Enabled Autonomy Because China Is Doing It

There is a Saturday Night Live sketch in which host Simu Liu plays a senior US Army officer, poised to reveal a top-secret project to his civilian superiors. Behold, Dog-Head-Man. The mystery machine is just a golden retriever’s head and a cast member’s arms. Hilarity ensues as the human cast member’s arms follow the script and the dog absolutely does not. The sketch reaches its punchline as the civilians tell the Army officer to shut the program down.

The Army officer responds, “I guess we’ll close the project down and let China take the lead in Dog-Head-Man soldiers.”  

“Whoa, whoa, whoa,” one of the civilians responds. “China is working on one of these? Well, why didn’t you say so? Get mass production started immediately!”

The satirical jab lands because this argument does arise in national security circles: China is pursuing AI-enabled autonomous military systems, so the argument goes, and therefore so should we. But this is the wrong way to think about the operational factors that are driving the US’s investment in autonomy. 

The development of military capabilities takes place in a move-countermove pattern: one side develops a capability, then the other side develops a capability to defend against it. So as the US’s potential adversaries develop autonomous or “uncrewed” military systems, that drives the US not to create autonomous systems of its own, but to develop capabilities to counter uncrewed systems. Dr. Heidi Shyu, the Under Secretary of Defense for Research and Engineering, put it this way: “You are already seeing the increase in number of … unmanned systems. … With that increasingly growing, exponentially worldwide, the obvious thing we have to think about is how are we going to counter that. Because we already have unmanned systems that are intruding into our installations.”

So we are left to ask: if responding to the People’s Liberation Army’s development of AI-enabled weapons systems isn’t driving US investment in autonomy, what is?

The PLA’s investments in more traditional weapons systems, ones capable of threatening the US’s traditionally crewed platforms, are driving the US to develop autonomous systems. Unlike any adversary the US has faced since the Vietnam era, the PLA has both the military capability and the capacity to hold at risk US military systems, especially those with Airmen, Sailors, Soldiers, and Marines in them. As Gen. C. Q. Brown, the former Chief of Staff of the US Air Force and current Chairman of the Joint Chiefs, has put it, “tomorrow’s Airmen are more likely to fight in highly contested environments, and must be prepared to fight through combat attrition rates and risks to the Nation that are more akin to the World War II era than the uncontested environment to which we have since become accustomed.”

As the People’s Liberation Army becomes increasingly able to hold at risk the US military’s high-end (and human-occupied) systems, the US military must pivot toward lower-cost, autonomously controlled weapons systems. You can see this shift perhaps most clearly in the Defense Department’s “Replicator Initiative,” designed to field “thousands of autonomous systems across multiple domains within the next 18 to 24 months, as part of the Pentagon’s strategy to counter China’s rapid armed forces buildup.”

The US military is not developing autonomy to counter Chinese autonomy. It’s developing autonomy to counter China’s more conventional capabilities. 

Myth #2: The Military Needs to Worry About Accidentally Building “Skynet”

For the uninitiated, Skynet is the fictional AI system in the Terminator franchise that ends up targeting humans. In the Terminator universe, the US military initially developed Skynet as a means of reducing human error in military operations. They underestimated the system’s ability to reason and learn, however, and ultimately US military leaders came to see it as a threat. When they attempted to shut it down, Skynet responded by initiating a nuclear conflict with Russia. That is the post-apocalyptic future from which the Terminator (Schwarzenegger) visits 1984 in the first film.

There are several reasons to think the US military will not create Skynet in our own timeline. For all the movie’s fictions, one historical element James Cameron got quite right is that during the Cold War, emerging technology really was driven by Pentagon investments. These technologies were designed with a military purpose, but often had positive spillover effects for the rest of society. 

Perhaps the best example of this phenomenon is the Global Positioning System (GPS). GPS was a “joint civil/military program” that sought to replace the proliferation of navigational aids across the globe with a single precision navigation and timing capability hosted on a constellation of satellites. The program started in 1973, and the public gained access to it a decade later. Though it was the US government that funded this major technological breakthrough, we all now use GPS all the time. It’s in our cars, our phones, and even our watches. During the Cold War, tech breakthroughs often flowed from the Defense Department out to the rest of the world.

Our current era is different, though. The US government is no longer the driving force behind much of the US’s technology development. Instead, the private sector drives technology development, and the US government plays the role of a “fast follower.” That is, government organizations try to harness the technology developments industry provides and incorporate them into government programs and systems.

The field of AI experts is divided as to the long-term risks advanced AI might pose. But even if we face the prospect of some artificial general intelligence that will rise up against us (the Terminator of our own timeline), such a system is much more likely to be born in the AI labs of Silicon Valley than in the basement of the Pentagon.

Myth #3: The Ethics Issues in Military AI Are All About Lethal Autonomous Weapons Systems 

In one sense, the literature on the ethics of autonomous weapons preceded earnest discussion of the ethics of other military applications of AI. On my reading of the literature, the seminal paper on the ethics of autonomous weapons was Rob Sparrow’s “Killer Robots” in 2007. That was five years before the machine learning revolution began with the AlexNet win by Geoffrey Hinton’s team at the annual ImageNet challenge. Since then, the broader AI ethics literature has been dominated by concerns about unintended consequences, for instance, unintentional bias on the basis of race, gender, or other protected categories, grounded in disparities in the data used to train machine learning models.

Because the literature on the ethics of killer robots got the jump on the broader field of AI ethics, much of the writing on the ethics of military AI has focused on lethal autonomous weapons systems. There is far more to the ethics of military AI, though, than autonomous weapons.

These applications of military AI beyond autonomous weapons are important because they will become commonplace. We can use our daily interactions with AI as an analogy. For most of us, our daily lives are impacted less by autonomous vehicles than by the recommender engines that tell us what we should watch, listen to, or buy; the optimization functions that help us avoid traffic; and the AI-generated content that barrages our social media feeds. In the same way, even as AI technology is applied to autonomous weapons systems, AI will increasingly play a role in the vast array of military missions and tasks that are only tangentially related to autonomous weapons. AI will help intelligence analysts sift through massive imagery datasets; it will help planners conduct mission analysis and develop possible courses of action; and it will help even traditionally crewed military systems identify objects of interest.

Though autonomous weapons systems raise important ethical questions, the ethical questions raised by military AI are much broader than lethal autonomous weapons systems. 
