Artificial intelligence is code entrusted to reason through choices on its own. The decisions themselves are the product of coded paths and inputs: data fed into different parts of algorithms, outcomes weighted, and an end product created that is designed to be useful to the humans who consume it.

There are degrees and worlds in AI, a vast space from deterministic to emergent behavior, from online machine learning to targeting tools, and it is in that complexity that care is most required, that human direction is most desired, that the fears and possibilities of generations of science-fiction authors wait to be realized. It is a complexity, in form and function, that is lacking from the White House’s new AI strategy. For all intents and purposes, the document guiding the government’s approach to artificial intelligence might as well say “fancy software” and be done with it.

In the “Executive Order on Maintaining American Leadership in Artificial Intelligence,” the White House states that its efforts in AI are guided by five principles. Those principles are, roughly: driving technological breakthroughs; development of technical standards; training workers with new skills for an AI economy; balancing trust in AI and protection of civil liberties; and supporting “an international environment that supports American AI research and innovation and opens markets for American AI industries.”

Nowhere is the how or the what of AI spelled out. There is AI investment throughout the federal government, and the White House wants to make sure it continues in a permissive environment, but the term is an umbrella, a catch-all, with no specificity as to what it does, or why it might require efforts to train new workers.

“I applaud a number of aspects of the executive order, such as the proposal — mirroring the white paper I released last summer — to open federal data-sets to non-federal entities,” said Sen. Mark R. Warner, D-Va., vice chairman of the Senate Select Committee on Intelligence. “Overall, however, the tone of this executive order reflects a laissez-faire approach to AI development that I worry will have the U.S. repeating the mistakes it has made in treating digital technologies as inherently positive forces, with insufficient consideration paid to their misapplication.”

Warner’s white paper lightly describes AI, highlighting the use of machine learning to train algorithms on pattern recognition in images, for example. But overall the paper is focused not on the risks of how AI is coded, but on the dangers that could come from a small segment of the technology industry capturing and dominating an emerging market.

If there’s an obvious precedent to all this, it’s rooted in the early 1990s. A confluence of factors, including a push from the Clinton administration, proactive legislation in Congress and favorable rulings from the courts created the conditions that would foster the Silicon Valley-led tech industry in its current form. Parallel to that initiative was getting the government itself to adopt the then-young technologies of the information age, all under the aegis of “reinventing government.”

While the White House is light on the specifics of what it wants AI to do, or even what exactly AI is, the Pentagon has a far more workable definition.

“AI refers to the ability of machines to perform tasks that normally require human intelligence — for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action — whether digitally or as the smart software behind autonomous physical systems,” reads the Department of Defense AI strategy summary.
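
That loop of learning from experience and then making predictions is concrete enough to sketch in a few lines of code. The sketch below is purely illustrative, using an open data set of labeled digit images rather than anything from the Pentagon’s portfolio:

```python
# A minimal "learn from experience, then predict" sketch of the DoD's working
# definition. The data set and task are generic stand-ins, not a DoD system.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # labeled 8x8 images of handwritten digits

# "Learning from experience": fit a model on a portion of the labeled examples.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# "Recognizing patterns" and "making predictions" on images it has never seen.
print(f"accuracy on unseen images: {model.score(X_test, y_test):.2f}")
```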

It helps that different parts of the Pentagon have been defining and working on AI in different ways. In June 2018, the Pentagon stood up the Joint Artificial Intelligence Center, or JAIC, and the military is already involved in the development and acquisition of specific AI projects. Project Maven, likely the most famous of these, was contracted to Google to adapt open-source tools to identify objects in drone videos.
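
Maven’s actual pipeline is not public, but the general shape of the task, running a pre-trained object detector over video frames, can be sketched with open-source tools. The model choice, confidence threshold and file name below are assumptions for illustration:

```python
# Notional sketch of a detector run over video frames, the general shape of the
# Project Maven task. Model, file name and threshold are illustrative choices.
import cv2
import torch
from torchvision.models import detection

model = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

cap = cv2.VideoCapture("drone_footage.mp4")  # hypothetical input video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV yields BGR uint8 frames; the model wants RGB float tensors in [0, 1].
    tensor = torch.from_numpy(frame[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        result = model([tensor])[0]
    for box, score, label in zip(result["boxes"], result["scores"], result["labels"]):
        if score > 0.8:  # keep only confident detections
            print(f"object: class={label.item()} score={score:.2f} box={box.tolist()}")
cap.release()
```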

“The impact of artificial intelligence will extend across the entire department, spanning from operations and training to recruiting and healthcare,” said Dana Deasy, the Defense Department’s chief information officer. “The speed and agility with which we will deliver AI capabilities to the war fighter has the potential to change the character of warfare. We must accelerate the adoption of AI-enabled capabilities to strengthen our military, improve effectiveness and efficiency, and enhance the security of our nation.”

The overall approach is focused on AI-enabled capabilities, using a shared and scalable foundation for AI, training an AI workforce, working with business, academia and allies, and leading the world in “military ethics and AI safety.” That last section is likely to take on standalone significance, especially since some companies have already made their future work with the DoD contingent on how it responds to questions of AI ethics.

“We will invest in the research and development of AI systems that are resilient, robust, reliable, and secure; we will continue to fund research into techniques that produce more explainable AI; and we will pioneer approaches for AI test, evaluation, verification and validation,” the document reads. “We will also seek opportunities to use AI to reduce unintentional harm and collateral damage via increased situational awareness and enhanced decision support.”

The strategy is a promise to develop principles more than an outline of principles themselves. Of particular note is the notion of AI as a tool explicitly for the reduction of harm and collateral damage. That same language is often used in the explanations for the use of precision weapons, though the circumstances guiding the development of precision weapons were all about battlefield utility first. AI that can reduce collateral damage is also likely AI that can be part of autonomous targeting. To its credit, the DoD strategy specifically acknowledges the risk that might come from “‘emergent effects’ that arise when two or more systems interact, as will often be the case when introducing AI to military contexts.”
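
What “emergent effects” means in practice is easiest to see in a toy model. In the notional sketch below, which is not drawn from the DoD document, each of two systems follows a rule that is stable on its own, yet the pair escalates without bound once they begin reacting to each other:

```python
# Toy illustration of emergent effects: two rules, each stable in isolation,
# escalate when coupled. Entirely notional; not from the DoD strategy.
def respond(own_level: float, observed_other: float) -> float:
    # Each system decays its own posture but slightly over-matches what it sees.
    return 0.5 * own_level + 0.6 * observed_other + 1.0

# Alone (observed_other fixed at 0), the rule settles at a level of 2.0.
# Coupled, the combined gain (0.5 + 0.6 > 1) drives both sides ever upward.
a = b = 0.0
for step in range(10):
    a, b = respond(a, b), respond(b, a)
    print(f"step {step}: a={a:8.2f} b={b:8.2f}")
```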

Borrowing and adapting innovations from the commercial space for tasks such as predictive maintenance and supply could be part of a quiet logistics revolution within the military. Warehouses adequately stocked with exactly what is needed or parts moved to the bases and troops before shortages occur could sustain operations at reduced cost, as imagined by the strategy, or at a higher tempo, as is possible when commanders adjust to the new normal.
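
As a hedged sketch of what predictive maintenance and supply might look like in code, with synthetic data and hypothetical feature names standing in for real fleet telemetry, a model scores each part’s failure risk so that spares can be ordered before the failures arrive:

```python
# A minimal predictive-maintenance sketch, assuming historical sensor logs with
# known failure labels. Features, thresholds and the failure rule are synthetic
# stand-ins, not drawn from any military system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic history: [engine_hours, vibration, oil_temp] per part, plus label.
X = rng.normal(loc=[500, 1.0, 90], scale=[200, 0.3, 10], size=(1000, 3))
y = (X[:, 0] + 400 * X[:, 1] > 900).astype(int)  # stand-in failure rule

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score the current fleet; pre-position spares for parts likely to fail soon.
fleet = rng.normal(loc=[500, 1.0, 90], scale=[200, 0.3, 10], size=(50, 3))
risk = model.predict_proba(fleet)[:, 1]
spares_needed = int((risk > 0.7).sum())
print(f"pre-position {spares_needed} spares before shortages occur")
```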

Indeed, one of the best-case scenarios for the military’s and the government’s adoption of more AI technologies is simply adapting what has already worked in the private sector and with contractors to new missions. If there is funding and effort behind the new AI plans, the government could realize many of these advantages first in cybersecurity.

“Today, AI is at the center of most major technological advances in areas as varied as cybersecurity, self-driving cars or development of cancer treatments,” said Dmitri Alperovitch, the co-founder and chief technology officer of CrowdStrike. “In cybersecurity, for example, these technological advances include enabling the defenders to recognize and stop never-before-seen attacks and being faster in detecting and responding to attacks.”
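
The “never-before-seen attacks” point maps to anomaly detection: model what normal looks like, then flag departures from it, rather than matching known signatures. A minimal sketch with synthetic traffic features (hypothetical, and not a description of CrowdStrike’s methods):

```python
# Sketch of anomaly-based detection: learn a baseline of normal behavior and
# flag departures from it. Features and data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Baseline of normal traffic: [bytes_sent, connections_per_min, distinct_ports]
normal = rng.normal(loc=[2000, 5, 3], scale=[500, 2, 1], size=(5000, 3))
detector = IsolationForest(contamination=0.01, random_state=1).fit(normal)

# New observations, including an exfiltration-like pattern never seen in training.
new = np.array([[2100, 4, 3],       # ordinary traffic
                [90000, 60, 45]])   # anomalous spike
print(detector.predict(new))  # 1 = normal, -1 = flagged as anomalous
```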

How, exactly, the United States fosters, adopts, and uses the tools powered and enabled by AI is likely of major importance for decades to come. The existence of national strategies for AI suggests that the government at least knows it needs to take AI seriously. Both the DoD and the White House seem content to follow the lead of the private sector, where there is a tremendous amount of work being done on everything from warehouse management to automated surveillance systems and innovative cybersecurity techniques. Yet if there is any overall ethos guiding the strategies, it is a need to meet the technology as it already is. Much of the government’s AI strategy is about playing catch-up.

Kelsey Atherton blogs about military technology for C4ISRNET, Fifth Domain, Defense News, and Military Times. He previously wrote for Popular Science, and also created, solicited, and edited content for a group blog on political science fiction and international security.
