WASHINGTON — U.S. military leadership has hammered home, particularly in recent months, that artificial intelligence will play an important part in future offensive and defensive operations. But as the armed forces develop AI, it’s important to think about how to counter AI as well, said Lt. Gen. Edward Cardon, the director of the Army’s Office of Business Transformation.

“Do we need a counter-AI strategy?” he asked, posing the question as food for thought at an Association for Unmanned Vehicle Systems International conference on Feb. 7.

Cardon told reporters following his speech that “we shouldn’t be naive to think that something we build is not going to be in some way countered, and we should start thinking about that as we are building [artificial intelligence capability], not as an afterthought.”

AI is already present in Army- and Defense Department-wide capabilities, from processing information and data into digestible forms for analysts to autonomously detecting incoming threats and responding to them without a human considering the next move.

As an example, Cardon pointed to active protection systems that the Army is preparing to rapidly field on combat vehicles in the near future.

For APS to deflect an incoming munition, it needs “near-instantaneous reaction” and can’t rely on a human brain to make the decision to act quickly enough. And microprocessors in APS are now fast — faster than bullets flying toward a target — making it possible to counter threats aimed at combat vehicles, Cardon said.
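To put rough numbers on that claim, consider the time budget for a notional intercept. The figures below are illustrative assumptions for the sketch, not specifications of any fielded active protection system:

```python
# Back-of-the-envelope APS reaction-time budget.
# All figures are illustrative assumptions, not data on any real system.
threat_speed_m_s = 300.0    # assumed RPG-class munition velocity
detection_range_m = 150.0   # assumed sensor detection range
cpu_instr_per_s = 1e9       # assumed modest 1 GHz embedded processor

time_to_impact_s = detection_range_m / threat_speed_m_s
instruction_budget = time_to_impact_s * cpu_instr_per_s

print(f"Time to impact: {time_to_impact_s * 1000:.0f} ms")  # 500 ms
print(f"Instructions available: {instruction_budget:.1e}")  # 5.0e+08
```

Even under these modest assumptions, half a second leaves an embedded processor hundreds of millions of instructions to classify the threat and trigger a countermeasure, while human reaction to an unexpected stimulus alone typically takes on the order of 250 milliseconds. That gap is why the decision loop is automated.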

And the technology could be taken even further: an AI embedded in APS could track exactly where a bullet came from and cue a weapon to automatically slew onto the target, he said.

But AI can be tricked, he said, so how can it be trusted to interpret data correctly and act accordingly?

“If you understand how the technology works, then you are putting great trust in the program and the validity of the data,” he said. “If you don’t have that kind of trust, how do you know it’s good?” he asked, pointing to technologies that can trick AI through cyber or electronic warfare.

For instance, researchers have shown that AI can be tricked by changing just a few pixels in an image: a system programmed to identify the subject of a photo can be rendered unable to do so by that small a tweak.

“They only changed a few pixels, so if we are starting to build systems that are identifying targets this way, you can start to trick the AI using the very technology that gives us the power,” Cardon said.
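Cardon did not name a specific technique, but one widely studied example of this class of attack is the fast gradient sign method (FGSM) from the adversarial-examples literature, in which every pixel is nudged slightly in the direction that most increases the classifier’s loss. The sketch below assumes a hypothetical PyTorch image classifier; `model`, `image` and `label` are placeholders, not references to any real system:

```python
# Minimal sketch of an adversarial perturbation via FGSM.
# `model` is any differentiable classifier returning logits; `image` is a
# batched tensor in [0, 1]; `label` is the true class index. All hypothetical.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` perturbed to increase the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a small step in the direction that raises the loss.
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return adversarial.clamp(0.0, 1.0).detach()

# Usage (hypothetical): adv = fgsm_perturb(model, image, label)
# model(adv).argmax() may now differ from `label`, even though `adv`
# is visually indistinguishable from the original image.
```

Because `epsilon` bounds the per-pixel change, the perturbed image looks unchanged to a human yet can be confidently misclassified, which is exactly the failure mode Cardon warns about for automated target identification.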

There is stiff competition around the world when it comes to developing AI. Cardon said China has made AI a national priority, not just a defense priority, and that Russia’s innovation in Ukraine has demonstrated capability “across the whole spectrum of warfare.”

While the Army and the other services have made investments in AI, Cardon said he expects investments will grow and the service will harness work done in academia and the commercial world to advance the technology and capability.

But transforming AI into a capability suitable for defense is not as easy as it looks. “One of the challenges with the use of AI,” Cardon said, is that “war fighting doesn’t translate” directly from the work being done in the private sector. A system still has to be designed to identify targets in a military context, for example.

“The question is: Who does that?” he said. “That will take significant investment. Right now there is more investment moving in this area.”

But “I don’t think DoD has to do this all on their own,” he added.

Jen Judson is an award-winning journalist covering land warfare for Defense News. She has also worked for Politico and Inside Defense. She holds a Master of Science degree in journalism from Boston University and a Bachelor of Arts degree from Kenyon College.
