A Global Hawk can fly for over 30 hours, filming the entire time. The MQ-9 Reaper can stay airborne for up to 27 hours. Long endurance and persistent surveillance are the promise of drones, the killer function that makes uninhabited aerial vehicles such a fixture of modern warfighting. It’s one thing to put a drone with a camera in the sky for more than a day. It is another to process that footage, render it meaningful, and then hand the finished product to the pilots or soldiers or Marines or sailors who are tasked with acting on that information.
The Pentagon, like many companies looking for a way to handle massive data sets and the terabytes of footage it’s recording, turned to the private sector, and in particular to Google. Google’s involvement in processing drone data, specifically as part of Project Maven, has come under fire internally. Yesterday, about a dozen Google employees resigned from the technology giant in protest.
The employees who are resigning in protest, several of whom discussed their decision to leave with Gizmodo, say that executives have become less transparent with their workforce about controversial business decisions and seem less interested in listening to workers’ objections than they once did. In the case of Maven, Google is helping the Defense Department implement machine learning to classify images gathered by drones. But some employees believe humans, not algorithms, should be responsible for this sensitive and potentially lethal work—and that Google shouldn’t be involved in military work at all.
Project Maven sits on the edge of a building maelstrom about technology companies, the Pentagon, the ways in which artificial intelligence is built, and the ends to which it is used. To untangle Maven from the maelstrom, a few clarifications. Artificial intelligence must be understood not so much as a science-fictional villain as a tool that reads and learns and reads again. With Maven, the focus is apparently on identifying objects in images and then having a human double-check the software’s work. A Google spokesperson told Gizmodo in March that “the technology flags images for human review, and is for non-offensive uses only.”
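The division of labor described here, a model that only flags and a person who decides, can be sketched in a few lines. This is purely illustrative: the class, names, and confidence threshold below are hypothetical and are not drawn from Project Maven or any Google system.

```python
# Illustrative sketch of a human-in-the-loop review queue: the model flags
# detections above a confidence threshold, and nothing is acted on until a
# human analyst confirms it. All names and thresholds are hypothetical.

from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Holds model detections until a human analyst confirms or rejects them."""
    threshold: float = 0.8                     # hypothetical flagging cutoff
    pending: list = field(default_factory=list)
    confirmed: list = field(default_factory=list)

    def ingest(self, frame_id: str, label: str, confidence: float) -> None:
        # The model only *flags* candidates; it takes no action on its own.
        if confidence >= self.threshold:
            self.pending.append((frame_id, label, confidence))

    def review(self, frame_id: str, approve: bool) -> None:
        # A human resolves each flagged detection, one way or the other.
        for item in list(self.pending):
            if item[0] == frame_id:
                self.pending.remove(item)
                if approve:
                    self.confirmed.append(item)

queue = ReviewQueue()
queue.ingest("frame-001", "vehicle", 0.93)   # flagged for human review
queue.ingest("frame-002", "vehicle", 0.41)   # below threshold, never queued
queue.review("frame-001", approve=True)      # the human, not the model, decides
```

The point of the pattern is in the last line: the software narrows the haystack, but the decision stays with a person, which is exactly the distinction at issue in the debate over what “non-offensive uses” means.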
The next hurdle is figuring out how, exactly, “non-offensive uses” are defined. Is “offensive” here meant in the sense of a campaign designed to seize territory from an armed foe, like the anti-ISIS push into Mosul and later Raqqa? Or is “offensive” a euphemism for “kinetic,” itself a euphemism for “killing or destroying objects and/or people with weapons”? If “offensive” here means “kinetic,” then Project Maven is processing images from drones doing surveillance and scouting, rather than from drones actively involved in firing missiles at targets.
“Google has emphasized that its AI is not being used to kill,” reports Kate Conger at Gizmodo, “but the use of artificial intelligence in the Pentagon’s drone program still raises complex ethical and moral issues for tech workers and for academics who study the field of machine learning.”
It is likely safe to say, then, that while Project Maven analyzes and interprets images, it does not directly turn that analysis into a targeting order, the way, say, the sensor and processor on the Long Range Anti-Ship Missile might distinguish between targets and then decide which of several options to hit. Still, the term “non-offensive uses” is unclear at best, and if the role of Maven is to scan surveillance footage for objects so that human commanders can then direct human troops in future attacks, the hair-splitting around “offensive” does not address the concerns of the Google employees who resigned in protest over the project.
On May 10, before the resignations from Google but after petitions against the company’s participation in Project Maven had circulated internally, we got a brief glimpse into how the defense and intelligence communities are responding to such protest. Justin Poole, the Deputy Director of the National Geospatial-Intelligence Agency, spoke at the 17th annual C4ISRNET conference, and after his presentation an audience member asked specifically about the intelligence community’s response to skepticism in Silicon Valley.
“I think NGA does a fairly good job of getting out there, whether industry day symposiums like this,” said Poole, “and explaining the challenges that our warfighters face, that first responders face, that our policy makers face with respect to having advantage over adversaries that could do us harm.
“We continue, we intend to continue to do that, that’s our mission, 27 years with the agency and its predecessors,” Poole said, “and I’ve seen things like this come and go and if we keep our eye on the ball and understand the importance of what it is we’re doing, I think we’ll come out ahead in the end. And specific instances like you referenced that might need a little more targeted effort and I’m confident that the Pentagon and the teams working Maven are doing that. It’s a tough question, but my high-level answer is to continue to be consistent about the value to national security of GEOINT, and that’s why we continue to participate in activities like this.”
Nothing about the earlier petition within Google or the resignations from the company suggests that the problem is one of thinking that combat deployments, intelligence analysis, or policy questions are not hard. Instead, the through-line of skepticism over Maven, and of objection to it, reads as an overriding concern that the decisions it informs are wrong. After 17 years of continuous war, of which only limited stages could be called offensive, making the case to Silicon Valley about “challenges that our warfighters face” may require more than explaining why what service members are doing is hard. It will likely require convincing the technologists needed for this work that what service members are doing is right.