The Pentagon made headlines last month when it adopted its five principles for using artificial intelligence, marking the end of a months-long effort to determine what guidelines the department should follow as it develops new AI tools and AI-enabled technologies.

Less well known is that the intelligence community is developing its own principles governing the use of AI.

“The intelligence community has been doing its own work in this space as well. We’ve been doing it for quite a bit of time,” Ben Huebner, chief of the Office of the Director of National Intelligence’s Civil Liberties, Privacy, and Transparency Office, said at an Intelligence and National Security Alliance event March 4.

According to Huebner, ODNI is making progress in developing its own principles, although he did not give a timeline for when they would be officially adopted. They will be made public, he added, noting there likely wouldn’t be any surprises.

“Fundamentally, there’s a lot of consensus here,” said Huebner, who noted that ODNI had worked closely with the Department of Defense’s Joint Artificial Intelligence Center on the issue.

Key to the intelligence community’s thinking is focusing on what is fundamentally new about AI.

“Bluntly, there’s a bit of hype,” said Huebner. “There’s a lot of things that the intelligence community has been doing for quite a bit of time. Automation isn’t new. We’ve been doing automation for decades. The amount of data that we’re processing worldwide has grown exponentially, but having a process for handling data sets by the intelligence community is not new either.”

What is new is the use of machine learning for AI analytics. Instead of being explicitly programmed to perform a task, machine learning tools are trained on data to identify patterns or make inferences before being unleashed on real-world problems. Because of this, the AI is constantly adapting, learning from each new bit of data it processes.

That is fundamentally different from other IC analytics, which are static.

“Why we need to sort of think about this from an ethical approach is that the government structures, the risk management approach that we have taken for our analytics, assumes one thing that is not true anymore. It generally assumes that the analytic is static,” explained Huebner.
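To make the distinction concrete, here is a minimal sketch, in Python, of a static, explicitly programmed rule next to a toy analytic that adjusts itself with every labeled example it sees. Nothing below is drawn from an actual IC system; the threshold, update rule, and data are invented purely for illustration.

```python
# Hypothetical illustration only: a static rule vs. an adaptive analytic.
# The threshold, learning rate, and data are invented for this example.

def static_analytic(signal_strength: float) -> bool:
    """A traditional, explicitly programmed rule; its behavior never changes."""
    return signal_strength > 0.75  # fixed threshold chosen by a programmer


class AdaptiveAnalytic:
    """A toy 'learning' analytic whose threshold shifts with every labeled example."""

    def __init__(self, threshold: float = 0.75, learning_rate: float = 0.05):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def predict(self, signal_strength: float) -> bool:
        return signal_strength > self.threshold

    def update(self, signal_strength: float, is_positive_case: bool) -> None:
        # Nudge the threshold toward catching positives and excluding negatives.
        if is_positive_case:
            self.threshold -= self.learning_rate * max(0.0, self.threshold - signal_strength)
        else:
            self.threshold += self.learning_rate * max(0.0, signal_strength - self.threshold)


model = AdaptiveAnalytic()
for strength, is_positive in [(0.7, True), (0.6, True), (0.8, False)]:
    model.predict(strength)
    model.update(strength, is_positive)

# The threshold is no longer 0.75: the analytic reviewed at deployment
# is not the analytic that is running after a few rounds of new data.
print(round(model.threshold, 3))
```

The point is simply that the second tool, unlike the first, is a moving target for any risk-management process built around a one-time review.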

To account for that difference, AI requires the intelligence community to think more about explainability and interpretability. Explainability is understanding how the analytic works in general, while interpretability is being able to understand why the analytic produced a particular result.
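As a hypothetical illustration of that distinction (the feature names and weights below are invented, not anything ODNI has described), a simple linear scoring analytic makes both ideas easy to see: explainability is being able to state how the model weighs its inputs in general, while interpretability is being able to break one specific score down into the contributions that produced it.

```python
# Hypothetical sketch of explainability vs. interpretability for a linear analytic.
# Feature names and weights are invented for illustration.

WEIGHTS = {"travel_frequency": 0.4, "network_overlap": 0.5, "comms_volume": 0.1}

def score(features: dict) -> float:
    """The analytic itself: a weighted sum of its input features."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def explain_model() -> dict:
    """Explainability: how does the analytic work in general? Here, its weights."""
    return dict(WEIGHTS)

def interpret_result(features: dict) -> dict:
    """Interpretability: why did this input get this score? Per-feature contributions."""
    return {name: WEIGHTS[name] * value for name, value in features.items()}

example = {"travel_frequency": 0.2, "network_overlap": 0.9, "comms_volume": 0.3}
print(score(example))             # the number handed to an analyst
print(explain_model())            # how the analytic weighs inputs overall
print(interpret_result(example))  # the answer to "how do we know this?"
```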

“If we are providing intelligence to the president that is based on an AI analytic and he asks, as he does, how do we know this, that is a question we have to be able to answer,” said Huebner. “We’re going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient.”

ODNI is also building an ethical framework to help employees implement those principles in their daily work.

“The thing that we’re doing that we just haven’t found an analog to in either the public or the private sector is what we’re referring to as our ethical framework,” said Huebner. “That drive for that came from our own data science development community, who said ‘We care about these principles as much as you do. What do you actually want us to do?’”

In other words, how do computer programmers apply these principles when they’re actually writing lines of code? The framework won’t provide all of the answers, said Huebner, but it will make sure employees are asking the right questions about ethics and AI.

And because of the uniquely dynamic nature of AI analytics, the ethical framework needs to apply to the entire lifespan of these tools, including the training data being fed into them. After all, it’s not hard to see how a data set that underrepresents a demographic could produce a higher error rate for that demographic than for the population as a whole.

“If you’re going to use an analytic and it has a higher error rate for a particular population and you’re going to be using it in a part of the world where that is the predominant population, we better know that,” explained Huebner.
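A hedged sketch of what such a check might look like in code, using invented data: compute the analytic’s error rate separately for each population group in an evaluation set and flag any group whose error rate is markedly worse than the overall rate.

```python
# Hypothetical bias check: per-group error rates on an evaluation set.
# The records, group labels, and flagging threshold are invented.
from collections import defaultdict

def error_rates_by_group(records):
    """records: list of (group, predicted, actual) tuples from a test set."""
    errors, counts = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        counts[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / counts[group] for group in counts}

def flag_disparities(records, tolerance=1.5):
    """Flag groups whose error rate exceeds the overall error rate by `tolerance`."""
    overall = sum(1 for _, predicted, actual in records if predicted != actual) / len(records)
    rates = error_rates_by_group(records)
    return [group for group, rate in rates.items() if overall and rate > tolerance * overall]

evaluation = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1),  # underrepresented, and misclassified
]
print(error_rates_by_group(evaluation))  # group_b's error rate is far higher
print(flag_disparities(evaluation))      # ['group_b']
```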

The IC wants to avoid those biases out of concern for privacy, civil liberties and, frankly, accuracy. And if bias is introduced into an analytic, intelligence briefers need to be able to explain it to policy makers so they can factor it into their decision making. That ties back to the concepts of explainability and interpretability Huebner emphasized in his presentation.

And because they are constantly changing, these analytics will require some sort of periodic review, as well as a way to catalog the various iterations of the tool. After all, an analytic that was reliable a few months ago could change significantly after being fed enough new data, and not always for the better. The intelligence community will need to continually check its analytics to understand how they’re changing and compensate accordingly.
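What that periodic review might look like in practice is sketched below; the metric, accuracy floor, and registry format are assumptions for illustration, not a published IC process. The idea is to re-score each retrained version of an analytic against a fixed validation set, catalog the result with a version identifier, and flag the tool for review when its performance slips.

```python
# Hypothetical version catalog and drift check for a continually retrained analytic.
# The metric, accuracy floor, and registry format are assumptions for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AnalyticVersion:
    version: str
    evaluated_at: datetime
    accuracy: float

@dataclass
class AnalyticRegistry:
    accuracy_floor: float = 0.90
    history: list = field(default_factory=list)

    def record_evaluation(self, version, predictions, labels):
        """Score a retrained version against a fixed validation set and catalog it."""
        correct = sum(1 for p, y in zip(predictions, labels) if p == y)
        entry = AnalyticVersion(version, datetime.now(timezone.utc), correct / len(labels))
        self.history.append(entry)
        return entry

    def needs_review(self) -> bool:
        """True if the most recent version has slipped below the accuracy floor."""
        return bool(self.history) and self.history[-1].accuracy < self.accuracy_floor

registry = AnalyticRegistry()
registry.record_evaluation("2020-01", predictions=[1, 0, 1, 1], labels=[1, 0, 1, 1])
registry.record_evaluation("2020-03", predictions=[1, 0, 0, 0], labels=[1, 0, 1, 1])
print(registry.needs_review())  # True: the analytic changed, and not for the better
```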

“Does that mean that we don’t do artificial intelligence? Clearly no. But it means that we need to think about a little bit differently how we’re going to sort of manage the risk and ensure that we’re providing the accuracy and objectivity that we need to,” said Huebner. “There’s a lot of concern about trust in AI, explainability, and the related concept of interpretability.”

Nathan Strout covers space, unmanned and intelligence systems for C4ISRNET.
