If It's Not Ethical, They Won't Field It: Pentagon Releases New A.I. Guidelines

The Pentagon is investing billions in artificial intelligence to mine data that could help it win the next war. Officials have said they are actively working to "refine information analysis" through A.I. so that results eventually reach operators on the ground or in the sky in a decisive, streamlined way. (US Army illustration)

The Pentagon has vowed that if it cannot use artificial intelligence on the battlefield in an ethical or responsible way, it will simply not field it, a top general said Monday.

Air Force Lt. Gen. Jack Shanahan, director of the Joint Artificial Intelligence Center (JAIC), made that promise as the Defense Department unveiled new A.I. guidelines, including five main pillars for its principled execution of A.I.: to be responsible, equitable, traceable, reliable and governable.

"We will not field an algorithm until we are convinced it meets our level of performance and our standard, and if we don't believe it can be used in a safe and ethical manner, we won't field it," Shanahan told reporters during a briefing. Algorithms supply the calculations or data-processing instructions that drive an A.I. system. The guidelines will govern both combat and non-combat uses of A.I. across the U.S. military.

Related: Pentagon Wants to Use AI to Predict the Next Wildfire or Earthquake

The general, who has held various intelligence posts, including overseeing the algorithmic warfare cross-functional team for Google's Project Maven, said the new effort reflects the U.S.'s intent to stand apart from Russia and China. Both countries are testing military applications of A.I. in ways that raise "serious concerns about human rights, ethics, and international norms."

For example, China has been building several A.I.-driven digital cities in a military-civilian partnership as it looks to understand how A.I. will propagate and to become a global leader in the technology. The cities track human movement through facial recognition software, watching citizens' every move as they go about their day.

While Shanahan stressed that the U.S. should be aggressive in its pursuit of accurate data to stay ahead, he said it will not go down the same path as Russia and China, which neglect the principles that should dictate how humans use A.I.

The steps put in place by the Pentagon, by contrast, make it possible to hold someone accountable for a bad action, he said.

"What I worry about with both countries is they move so fast that they're not adhering to what we would say are mandatory principles of A.I. adoption and integration," he said.

The recommendations came after 15 months of consultation with commercial, academic and government A.I. experts as well as the Defense Innovation Board (DIB) and the JAIC. The DIB, which is chaired by former Google CEO Eric Schmidt, made the recommendations last October, according to a statement. The JAIC will be the "focal point" in coordinating implementation of the principles for the department, the statement said.

Dana Deasy, the Pentagon's Chief Information Officer, said the guidelines will become a blueprint for other agencies, such as the intelligence community, that will be able to use it "as they roll out their appropriate adoption of A.I. ethics." Shanahan added that the guidelines are a "good scene setter" for collaboration with the robust tech sector, especially Silicon Valley.

Within the broader Pentagon A.I. executive committee, a specific subgroup of people will be responsible for formulating how the guidelines get put in place, Deasy said. Part of that, he said, depends on the technology itself.

"They're broad principles for a reason," Shanahan added. "Tech adapts, tech evolves; the last thing we wanted to do was put handcuffs on the department to say what you could and could not do. So the principles now have to be translated into implementation guidance."

That guidance is currently under development. A 2012 military doctrine already requires a "human in the loop" to control automated weapons, but it does not delineate how broader uses of A.I. fit within that decision authority.

The Monday announcement comes roughly one year after DoD unveiled its artificial intelligence strategy in concert with the White House executive order that launched the American AI Initiative.

"We firmly believe that the nation that masters A.I. first will prevail on the battlefield for many years," Shanahan said, reiterating previous U.S. officials' positions on the leap in technology.

Similarly in 2017, Russian President Vladimir Putin said in a televised event that, "whoever becomes the leader in this sphere will become the ruler of the world."

-- Oriana Pawlyk can be reached at oriana.pawlyk@military.com. Follow her on Twitter at @Oriana0214.
