
What Role Should AI Play in War?

By Matthew S.W. Silk
27 Sep 2024

This month, officials from over 60 nations met and agreed on a blueprint to govern the use of artificial intelligence in the military. Countries like the United States, the Netherlands, South Korea, and the United Kingdom signed an agreement stating that “AI applications should be ethical and human-centric.” (China was a notable holdout.) The agreement governs issues like risk assessments, human control, and the use of AI for weapons of mass destruction. With AI already being used by militaries and with such a wide variety of potential applications, significant questions and fears abound. For some, the technology holds the promise of ending wars more efficiently and (perhaps) with fewer casualties. Others, meanwhile, fear a Manhattan Project moment for the world that could change warfare forever if we are not careful.

The thought of bringing artificial intelligence to the battlefield often conjures the image of “killer robots.” And while there have been moves to create robotic military units and other forms of lethal autonomous weapon systems (LAWs), there are a great many potential military uses for artificial intelligence – from logistics and supply chain matters to guided missile defense systems. In the war zones of Ukraine and Gaza, AI has been increasingly utilized for the purposes of analyzing information from the battlefield to identify targets for drone strikes. There is also, of course, the possibility of applying AI to nuclear weapons to ensure an automated response as part of a mutually assured destruction strategy.

Given such a wide variety of potential applications, it is difficult to assess the various ethical drawbacks and benefits that AI may afford. Many argue that the use of AI will lead to a more efficient, more accurate, and more surgical form of warfare, allowing nations to fight wars at a lower cost and with less risk of collateral damage. If true, there could be humanitarian benefits, as autonomous systems may not only minimize casualties on the opposing side but also keep one’s own forces out of harm’s way. That harm includes not just physical injury but long-term psychological damage as well. There is also the argument that automated defense systems will be better able to respond to potential threats, particularly when there are concerns about swarms or dummy targets overwhelming human operators. Thus, the application of AI may lead to greater safety from international threats.

On the other hand, the application of AI to war-making poses many potential ethical pitfalls. For starters, making it easier and cheaper to wage war might incentivize states to do so more often. There is also the unpredictable nature of these developments to consider, as smaller nations may find that they can manufacture cheap, effective AI-powered hardware that could upset the balance of military power on a global scale. Some argue that the application of AI to autonomous weapons represents another “Oppenheimer moment” that may forever change the way war is waged.

Another significant problem with using AI in military hardware is that AI is well-known for being susceptible to various biases. This can happen either because of short-sightedness on the part of the developer or because of limitations and biases within the training data used to build these systems. Such bias is especially problematic in surveillance and in identifying potential targets and distinguishing them from civilians, because AI systems can misidentify individuals as targets. For example, Israel relied on an AI system to select targets despite the fact that it made errors in about 10% of cases.

AI-controlled military hardware may also create an accountability gap. Who should we hold accountable when an AI-powered weapon mistakenly kills a civilian? Even in situations where a human remains in control, there are concerns that AI can still influence human thinking in significant ways. This raises questions about how to ensure accountability for military decisions and how to ensure that those decisions remain in keeping with international law.

Another serious concern involves the opacity of AI military systems. Many are built as black boxes, such that we cannot explain why a system reached the conclusion that it did. These systems are also classified, making it difficult to identify the party responsible for a poorly designed or poorly functioning system. This creates what has been described as a “double black box,” which makes it all but impossible for the public to know whether these systems are operating correctly or ethically. Without that kind of knowledge, democratic accountability for government decisions is undermined.

Thus, while AI may offer the promise of greater efficiency and potentially even greater accuracy, it may come at great cost, and these tradeoffs seem especially difficult to navigate. If, for example, we knew an AI system had a 10% error rate while human error rates run closer to 15 or 20%, would that fact prove decisive, even given the concerns about AI accountability? When it comes to military matters the risks of error carry enormous weight, but does that make it more reckless to use this unproven technology or more foolhardy to forgo its potential benefits?


Matt has a PhD in philosophy from the University of Waterloo. His research specializes in philosophy of science and the nature of values. He has also published on the history of pragmatism and the work of John Dewey.