The Defense Advanced Research Projects Agency recently launched a new research program to enhance the safety and predictability of autonomous systems including driverless vehicles and unmanned aerial vehicles (UAVs).
The program, called Assured Autonomy, mirrors efforts in the commercial sector.
The expected rise of autonomous vehicles has led many to seek means of assuring their safe operation. Gartner reports that more than 46 companies, as of mid-2017, were building artificial intelligence (AI)-based software to help guide autonomous vehicles.
AI is exactly what the military has in mind as it looks for ways to ensure predictable operations of autonomous systems in highly complex combat situations. “They need to be able to operate in uncertain environments, which means they need to learn, to construct their own behaviors,” said Sandeep Neema, a program manager in DARPA’s Information Innovation Office (I2O).
The Defense Department has been keeping close watch on issues surrounding the rise of autonomous vehicles. In 2016 the Defense Science Board issued a Report on Autonomy, stressing the need for systems to operate safely and predictably.
“Emphasis is placed on assurance that the system will perform as intended and be as immune as possible to unintended loss of control, capture, or compromise by the adversary,” the report notes. “[W]ill the system be available when you need it and perform as designed and directed?”
Trustworthiness is the key concept underlying DARPA’s research here. Assured Autonomy seeks to develop systems that will enable self-piloted vehicles to make smart decisions: not combat decisions, not operational choices, but decisions about where and how to maneuver in order to complete a human-defined mission, with minimal risk of mishap.
“We want to have a collection of tools that can provide a quantifiable level of assurance around the safety and reliability of these systems, especially around navigation and collision avoidance,” Neema said. “If you want to reach a certain spot within a certain time frame, we want to provide assurance that the system can do that.”
In the commercial world there’s little confidence that this can be achieved with today’s algorithms. A Massachusetts Institute of Technology study of consumer attitudes toward autonomous transportation found that 29 percent “don’t trust it” and 21 percent think “it’s unsafe.”
In a release announcing its Consumer Trends in Automotive online survey, Gartner research director Mike Ramsey highlighted exactly the issues DARPA seeks to address. “Fear of autonomous vehicles getting confused by unexpected situations, safety concerns around equipment and system failures and vehicle and system security are top concerns around using fully autonomous vehicles,” he said.
DARPA’s effort will tackle these worries at the level of basic science, as researchers try to devise a learning system that will empower autonomous vehicles to better understand their rapidly changing surroundings. “What are the languages we will use? What will the abstractions be?” Neema said. At this early stage, “we need to build a new language to describe learning-enabled systems.”
These systems will need to enable driverless vehicles and UAVs not just to perceive a present situation but also to rapidly calculate an immediate potential future. “You take an action, turn the steering wheel by three degrees, and you want to know what the impact will be three seconds down the line,” Neema said. “You want to compute that in advance and you want to evaluate many, many actions simultaneously in a highly dynamic environment.”
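The look-ahead Neema describes — predicting the outcome of a candidate steering input a few seconds out, for many candidates at once — is the core loop of sampling-based predictive control. The sketch below is a hypothetical illustration of that idea, not DARPA’s system: it rolls a simple kinematic model forward three seconds for each candidate steering rate and keeps the one whose predicted end state lies closest to a goal.

```python
import math

def rollout(x, y, heading, speed, steer, dt=0.1, horizon=3.0):
    """Forward-simulate a simple kinematic model for `horizon` seconds,
    applying a constant steering rate `steer` (radians per second)."""
    for _ in range(int(horizon / dt)):
        heading += steer * dt                # steering rate changes heading
        x += speed * math.cos(heading) * dt  # advance along current heading
        y += speed * math.sin(heading) * dt
    return x, y

def best_steer(x, y, heading, speed, goal, candidates):
    """Evaluate every candidate steering rate in parallel fashion and
    return the one whose predicted position ends nearest the goal."""
    def cost(steer):
        px, py = rollout(x, y, heading, speed, steer)
        return math.hypot(px - goal[0], py - goal[1])
    return min(candidates, key=cost)
```

For example, a vehicle at the origin moving at 5 m/s with a goal off to its left would select a positive (leftward) steering rate from the candidate set; a real system would add obstacle terms to the cost and re-plan every control cycle.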
These complex calculations won’t happen in a vacuum. Rather, the behavior of the driverless system will have to be determined by specific mission goals, adding a further layer of complexity to the needed algorithms.
“For example, a ship operates autonomously in the open ocean and when it encounters other ships you want to ensure it will not collide with these other ships,” Neema said. “The learning algorithm is somehow guarded so that the decisions it makes under these conditions conform to certain properties, with verifications to tell you whether the algorithm is fulfilling that safety property.”
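One common way to realize the “guarded” learning algorithm Neema sketches is a runtime safety monitor, sometimes called a shield: the learned policy proposes an action, a checker verifies it against the safety property, and an unsafe proposal is overridden by a known-safe fallback. The sketch below is illustrative only — the function names, the straight-line projection, and the 200-meter separation threshold are assumptions, not details of the program.

```python
def predicts_collision(own_pos, own_vel, other_pos, other_vel,
                       horizon=60.0, min_separation=200.0):
    """Check whether straight-line projections of the two ships come
    within `min_separation` meters at any point in the next `horizon` s."""
    for t in range(int(horizon)):
        ox = own_pos[0] + own_vel[0] * t
        oy = own_pos[1] + own_vel[1] * t
        tx = other_pos[0] + other_vel[0] * t
        ty = other_pos[1] + other_vel[1] * t
        if ((ox - tx) ** 2 + (oy - ty) ** 2) ** 0.5 < min_separation:
            return True
    return False

def shielded_action(proposed_vel, own_pos, other_pos, other_vel, safe_vel):
    """Accept the learned policy's proposed velocity only if it satisfies
    the no-collision property; otherwise substitute a safe fallback."""
    if predicts_collision(own_pos, proposed_vel, other_pos, other_vel):
        return safe_vel  # override: the safety property would be violated
    return proposed_vel
```

The verification step here is deliberately simple; the hard research problem DARPA is pointing at is producing such guarantees when the vehicle’s model of the world is itself learned and uncertain.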
DARPA’s recent launch of Assured Autonomy kicks off the first of three planned phases in a program set to run until 2022. There’s a lot to be done between now and then if the military is going to be able to fulfill its autonomy ambitions with any degree of confidence.
“We should not downplay the difficulty of these things,” Neema said. “These are complex algorithms, and providing comprehensive proofs is really hard.”