WASHINGTON — By the end of 2018, the Air Force’s distributed common ground system will have tested a new open architecture backbone, paving the way for it to use algorithms and machine learning to exploit data, a service official told Defense News on Sept. 5.
The Distributed Common Ground System is the service’s primary intelligence processing system, collecting imagery, video feeds and signals from platforms like the U-2 spy plane, the RQ-4 Global Hawk and the MQ-9 Reaper. But the system, stood up in the mid-1990s, has a closed architecture, making it difficult to modify.
Once the DCGS open architecture backbone is complete, the Air Force will be able to upload new technologies to the system — such as algorithms that can interpret data and reduce the workload of intelligence analysts — much like a smartphone downloads apps, said Kenneth Bray, acting associate deputy chief of staff for intelligence, surveillance and reconnaissance.
The Air Force is taking a piecemeal approach to implementing the open architecture — for instance, testing the approach on DCGS video feeds at Beale Air Force Base, California, and then testing it on the signals intelligence feed at another base, he said.
“They just finished the first big operational demonstration, and we’re fairly satisfied with how it turned out. With anything, there are a few things to learn from it, so we’re going to have to go back and make some corrections, but we’re heading on to the next step,” Bray said.
“By this time next year, we will have tested all the pieces,” Bray said, and the Air Force can then begin rolling out the changes to all 27 DCGS sites.
For decades, the Air Force has had to manually analyze data coming into DCGS and other intelligence processing systems. That labor-intensive process often meant analysts were simply answering the question, “what is happening?” instead of finding patterns or predicting what will happen next.
“You’d be sitting [and] looking at your imagery, I’d be sitting [and] listening to my signals intelligence, and you’d be talking — literally — to me,” he said. “There are algorithms you can write that can automatically put that together. So instead of watching an entire chain of events unfold for 30 minutes before you finally arrive at an a-ha moment between you and myself, the a-ha moment can happen in five minutes or half a minute or five seconds.”
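The cross-cueing Bray describes — matching what an imagery analyst sees against what a signals analyst hears, without the two having to talk it through — can be sketched as a simple time-window correlation. The feeds, records and field names below are entirely hypothetical, not DCGS data or code; the point is only that pairing reports by timestamp is mechanical work an algorithm can do in seconds.

```python
from datetime import datetime, timedelta

# Hypothetical multi-INT reports; in practice these would arrive on separate feeds.
imagery = [
    {"time": datetime(2018, 9, 5, 14, 0), "report": "vehicle at compound"},
    {"time": datetime(2018, 9, 5, 14, 2), "report": "vehicle departs"},
]
sigint = [
    {"time": datetime(2018, 9, 5, 14, 1), "report": "handset active near compound"},
]

def correlate(feed_a, feed_b, window=timedelta(minutes=5)):
    """Pair reports from two feeds whose timestamps fall within a short window."""
    return [
        (a, b)
        for a in feed_a
        for b in feed_b
        if abs(a["time"] - b["time"]) <= window
    ]

hits = correlate(imagery, sigint)
for img, sig in hits:
    print(img["report"], "<->", sig["report"])
```

A real system would correlate on location and entity identity as well as time, but even this toy version shows why the “a-ha moment” no longer depends on two analysts comparing notes for half an hour.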
The Air Force’s intelligence community isn’t the only part of the service struggling to digest the massive amounts of data coming in from aircraft, drones, satellites and other platforms. Air Force Chief of Staff David Goldfein has often criticized the segmented, stovepiped nature of its command and control enterprise, saying that the service needs to incorporate autonomy and artificial intelligence to help operators make decisions faster.
As the Air Force works to transition DCGS to an open architecture, it is also helping the Defense Department lay the foundation to make DCGS more automated. Through an effort called Project Maven, the department hopes to collect a pool of “good data” that algorithms can process and adapt to — a process called machine learning.
“If we can get our enterprise turned over to labeling all the data that it currently generates, then all that data becomes searchable and can be manipulated by any number of algorithms,” Bray explained.
The hope is that in the future, intelligence systems like DCGS will not only help analysts find critical information, say, a truck linked to a terrorist group, but also pull up every other time the Air Force has seen that truck over the past year.
“That’s something that in the past would have taken six months or a year, to go manually track where that vehicle has been in the last year,” Bray said.
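The query Bray has in mind becomes trivial once the archive is labeled the way Project Maven envisions. The sketch below is illustrative only, with invented records, labels and track identifiers; it is not how DCGS stores data, but it shows why a labeled archive turns a six-month manual hunt into a one-line lookup.

```python
# Hypothetical labeled archive: each record carries machine-readable labels
# and a track identifier, which is what makes an instant
# "where has this truck been?" query possible.
archive = [
    {"date": "2018-01-12", "sensor": "MQ-9", "labels": {"truck", "convoy"}, "track_id": "T-041"},
    {"date": "2018-04-03", "sensor": "U-2", "labels": {"truck"}, "track_id": "T-041"},
    {"date": "2018-06-20", "sensor": "RQ-4", "labels": {"building"}, "track_id": "B-007"},
]

def sightings(track_id, records):
    """Return every archived record tagged with the same track identifier."""
    return [r for r in records if r["track_id"] == track_id]

for r in sightings("T-041", archive):
    print(r["date"], r["sensor"])
```

The hard part, as the article notes, is not the query but the labeling: getting the enterprise to tag its data as it is generated is what makes every later algorithm possible.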
Valerie Insinna is Defense News' air warfare reporter. She previously worked the Navy/congressional beats for Defense Daily, which followed almost three years as a staff writer for National Defense Magazine. Prior to that, she worked as an editorial assistant for the Tokyo Shimbun’s Washington bureau.