Is War Too Important to Be Left to Robots?

Sea Hunter, an autonomous unmanned surface vehicle, arrives at Pearl Harbor to participate in the Rim of the Pacific (RIMPAC) 2022 exercise. (Aiko Bongolan/U.S. Navy)

Gary Anderson was the founding director of the Marine Corps' Center for Emerging Threats and Opportunities.

The opinions expressed in this op-ed are those of the author and do not necessarily reflect the views of Military.com. If you would like to submit your own commentary, please send your article to opinions@military.com for consideration.

We know that the Russians have been experimenting with the weaponization of artificial intelligence (AI), but we have seen no decisive evidence that they have used it in any meaningful way in the Russo-Ukrainian War.

There are several potential reasons for this. The most likely is that they have not advanced far enough to trust AI for independent use on the battlefield. The least likely is any moral compunction about using such technology for weapons, given their brutality toward the civilian population of Ukraine and their relatively unhindered use of indiscriminate artillery and missile strikes. We also know that the Chinese and Iranians are examining the militarization of AI, but there is not much open-source information on the subject.

Killer AI has long been a staple of science fiction. "The Terminator" and "WarGames" were cautionary tales about such weapons slipping out of human control. In one famous "Star Trek" episode, an AI is given control of the starship Enterprise during a military exercise and causes death and destruction until Capt. Kirk and his crew regain command. Until now, those issues were speculative. Today, technical advances are forcing us to confront them for real.

Western nations, particularly the United States, have been reluctant to let AI operate independently on the battlefield without human supervision. That might change, however, if an opponent gains a decided tactical and operational advantage from using AI. There is little doubt that AI can make decisions faster than human operators; the question is whether those decisions will be better.

Osama bin Laden's 2001 escape from the Tora Bora mountains in Afghanistan was blamed largely on the ponderous decision-making process in U.S. Central Command's targeting cell, where a committee of officers failed to agree on taking a shot until the intended target had disappeared into the caves. The Marine Corps' Center for Emerging Threats and Opportunities (CETO) ran an experiment to examine whether a simulated AI decision process might improve on that kind of targeting. Two teams were given an identical set of 20 targeting problems involving a simulated Predator unmanned aircraft armed with Hellfire missiles. The problems ranged from simple to very complex, and some involved civilians intermixed with hostile fighters.

The first team was a human targeting cell comprising intelligence personnel, an operational lawyer and a public affairs specialist, led by an experienced operations officer -- similar to the decision-making group in the Tora Bora situation. The second team assumed that the Predator had shoot/don't-shoot decision-making AI aboard; the single human simulating the AI had a strict set of criteria on which to base decisions, standing in for computer programming. The results were interesting. Not surprisingly, the AI simulation made decisions faster than the targeting team, but both made the wrong decision about 20% of the time. The situations in which the two went wrong generally differed, but, as in actual combat, neither was immune to the fog of war.

The big difference was accountability. The human team could be held responsible for its decisions, and it generally erred on the side of caution when innocent civilians appeared to be present. The simulated AI showed no such restraint; it was required to act strictly within the limits of its programmed instructions. In one case, it fired into a street awning under which a group of armed insurgents had taken cover, killing a group of shoppers in a simulated souk. Situations such as this raise the question of who would be held responsible for the deaths. Would it be the person or persons who programmed the AI? Would it be the manufacturer? We could always decommission the aircraft, but what would that solve?
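To make concrete what "acting strictly within the limits of programmed instructions" can mean, here is a minimal, purely hypothetical sketch of a rule-based shoot/don't-shoot check. It does not reflect the actual CETO experiment or any fielded system; the field names, rules and thresholds are invented for illustration only.

```python
# Hypothetical illustration only -- not the CETO experiment's actual rules
# or any real targeting system. Field names and thresholds are invented.

from dataclasses import dataclass

@dataclass
class TargetPicture:
    hostiles_confirmed: bool        # positive identification of armed hostiles
    civilians_detected: int         # civilians the sensors can actually see
    within_rules_of_engagement: bool
    weapon_effects_radius_m: float
    nearest_noncombatant_m: float   # distance to nearest detected noncombatant

def engage(picture: TargetPicture) -> bool:
    """Rigid rule set: fire only if every condition is met.
    The weakness is that the rules can only reason about what the sensors
    report; civilians hidden from view never enter the calculation."""
    if not picture.hostiles_confirmed:
        return False
    if not picture.within_rules_of_engagement:
        return False
    if (picture.civilians_detected > 0
            and picture.nearest_noncombatant_m < picture.weapon_effects_radius_m):
        return False
    return True

# A scene like the simulated souk: insurgents visible, shoppers concealed.
souk = TargetPicture(
    hostiles_confirmed=True,
    civilians_detected=0,           # shoppers under the awning go unseen
    within_rules_of_engagement=True,
    weapon_effects_radius_m=15.0,
    nearest_noncombatant_m=float("inf"),
)
print(engage(souk))  # True -- the rules are satisfied; the hidden shoppers are not
```

The point of the sketch is not the particular rules but their brittleness: a human team might hesitate at an awning in a crowded market, while a rule set that is technically satisfied simply fires.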

The real issue here remains moral. Except for the case of robot-on-robot combat, using AI will require the robot to decide to take human lives. As mentioned previously, our potential opponents will likely not be deterred by moral or legal concerns, and it is possible this could give them a distinctive tactical edge.

To some extent, we already use limited computer decision-making in weapons. Cruise missiles and other "fire and forget" systems fly themselves to their targets, but the initial decision to engage is still made with a human in the loop. If innocents are killed, there is a chain of accountability.

Collateral civilian damage will always happen in war, and trade-offs against friendly lives saved will always be made, but those are human decisions. The choice to leave humans out of the loop should not rest purely on technical considerations. This is an issue potentially as serious as land mines and chemical and nuclear weapons. If war is too important to be left solely to the generals, we need to ask whether we want it in the hands of robots.
