Eager to better exploit the vast amount of video flowing in from drones, the Marine Corps is working to better train its people and to hire outside analysts. In September, the service awarded contractor Portal Dynamics a $49 million contract to provide video analysts to embed with deployed Marine units.
The Corps had advertised in May on the Federal Business Opportunities website for “motion video analysts to support national operational forces by providing real-time 24/7/365 motion video exploitation at deployed locations.” And that’s just part of the Marines’ effort to figure out how to squeeze more relevant information out of gathered video.
Whenever a drone is aloft and over its target, a Marine is watching the live video feed, and the stream is downloaded and saved to a local storage device belonging to the UAV's squadron. But the ever-growing mountain of video is all but unsearchable. Much of the video has no internal tags that would allow a Marine to find footage of a particular target or action, such as a few seconds showing an insurgent planting an improvised explosive device.
Master Sgt. Christopher Jourdan, operations chief for the Corps’ 1st Intelligence Battalion and former imagery chief at Marine Corps headquarters, said the service is attacking the problem in three ways: upgrading software, tagging more video and improving the training of its imagery analysts.
Software is improving, and the use of meta-tagging is growing — the Army has already started the practice — but the Marines are playing catch-up. The service is hoping software can help tag video with time and location data. And it is turning its focus to training, which has been inadequate, Jourdan said.
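The kind of meta-tagging the service is after can be illustrated with a minimal sketch. Everything here is hypothetical, not any fielded military system: the idea is simply that once each saved video segment carries time and location metadata, an analyst can query for footage of a place and time rather than scrubbing through raw files by hand.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical illustration of time/location meta-tagging: each saved
# video segment carries a start time and coordinates, so footage can be
# found by querying a time window and a geographic bounding box.

@dataclass
class VideoTag:
    clip_id: str
    start: datetime
    lat: float
    lon: float

def find_clips(tags, t0, t1, lat_range, lon_range):
    """Return clip IDs whose tags fall in the time window and bounding box."""
    return [
        t.clip_id
        for t in tags
        if t0 <= t.start <= t1
        and lat_range[0] <= t.lat <= lat_range[1]
        and lon_range[0] <= t.lon <= lon_range[1]
    ]

# Two notional clips with invented times and coordinates.
tags = [
    VideoTag("clip-001", datetime(2012, 1, 5, 14, 30), 31.82, 64.57),
    VideoTag("clip-002", datetime(2012, 1, 5, 16, 10), 31.61, 65.71),
]

hits = find_clips(
    tags,
    datetime(2012, 1, 5, 14, 0), datetime(2012, 1, 5, 15, 0),
    (31.8, 31.9), (64.5, 64.6),
)
print(hits)  # only clip-001 falls inside the window and box
```

In practice, fielded systems embed this kind of metadata in the video stream itself rather than in a side table, but the searchability payoff is the same.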
“For the most part, this training has been on-the-job training. They go out to the VMU [unmanned aircraft squadron] and they get a crash course,” he said. “These Marines, when they started out, they were doing quick happy-snaps with video or providing overwatch. They’ve actually gone into a little more of the motion imagery analysis.”
The service recently bulked up its pre-deployment training. Within the past three years, video training has jumped from six hours to 18 in an imagery course, and that number will continue to climb, Jourdan said.
“We need a lot more training because the FMV or motion imagery assets out there have quadrupled, and we need to get more of our imagery analysts formally trained on full-motion video,” he said. “What we’d like to do is encompass that in our basic training.”
More detailed analysis is done by highly trained FMV analysts like those provided by Portal Dynamics. These specialists sit at the ground stations of Shadow and ScanEagle aircraft, watching the video they collect and transmitting relevant information through real-time chat or over the radio to the warfighter.
“Full-motion video exploitation has really multiplied,” Jourdan said. “We’ve really got more intel involvement with these analysts. They’re still working from location with the VMU, but they’re providing that detailed reporting out to guys on the ground.”
Jourdan said that, as technology improves and video continues to be refined, the military could use it to direct strikes, shrinking the time needed to respond to emerging threats.
Lance Menthe, a physical scientist and analyst at Rand, said the Air Force has deployed FMV analysis cells to support MC-12W manned aircraft, but he hasn't heard of analysts being deployed directly with ground units. He agrees that military video analysis still has a long way to go.
“Although the use of FMV has matured in terms of counterterrorism and counterinsurgency operations, in many ways it is still in its infancy in terms of the full spectrum of military operations,” Menthe said. “I think it will be many years before we realize just how much the advent of real-time intelligence has changed the way we think about what is possible.”
Deploying FMV analysts with Marine units could, "in principle, provide the tightest possible analysis loop, since the FMV analysts could then be in direct, face-to-face contact with the commanders they are supporting," Menthe said. "However, FMV analysis is already considered to be quite responsive, almost real-time, so it is not immediately clear how much improvement could be attained by embedding, and this would need to be weighed against the risk to the analyst."
It might be a more effective use of resources to have those experts performing “forensic” FMV analysis, seeking to fuse video data with other intelligence, instead of being in the thick of a tactical fight, he said.
Still, the sheer amount of data remains a daunting challenge, Menthe said.
“The biggest obstacle for effective, timely analysis of FMV is that there is soon going to be too much of it,” he said. “In the future, when wide-area motion imagery becomes commonplace, FMV analysts will need technological assistance to help them make the best use of it.”
Some companies are betting on such technological advancements and a growing market for FMV analysis.
BAE Systems describes itself as “one of the largest providers of FMV analysts and training,” said spokeswoman Lisa McCarty. And Michael Ehrlich, product manager of GEOINT Enterprise Solutions at ITT Exelis Geospatial Systems, said the company is focused on turning video into actionable intelligence. Ehrlich said the company is seeking to go beyond processing single streams of video to correlating that video with other incoming intelligence.
“It’s to help that tactical [operator] get to that 20 seconds of video he needs,” he said.
This article appeared in the January/February edition of C4ISR Journal.