AI For Intel: Incremental Advances But No 'Big Brain'

Jun. 5, 2013 - 06:53PM
By GABRIEL MILLER

Smart enough to kill?

In April, a United Nations report called for an international moratorium on the development of “lethal autonomous robots” — weapons that can “decide” for themselves, without human input, when to fire and kill people.
Drafted by Christof Heyns, the U.N.’s special rapporteur for extrajudicial killings, the report draws a distinction between such robots and UAVs, like Reapers or Predators, which fire weapons under the control of operators far away on the ground. The U.N. report is specifically concerned with a new technological dimension: “targeting decisions could be taken by the robots themselves.”
The concern about “lethal robots” goes to the heart of artificial intelligence technology; no decision can be more important than the use of deadly force in war.
In November, the Pentagon laid out guidelines for the development of autonomous weapons, saying that they had to be designed to include “appropriate levels of human judgment over the use of force.” Even with the guidelines, however, the level of human involvement necessary for these autonomous weapons isn’t entirely clear. There is a stark difference between a weapon that requires a soldier to confirm a target each and every time it fires, and one that halts only when a soldier realizes that a target has been misidentified.
There is no consensus on the issue. Even the U.N.’s report suggests that accountability in warfare may actually be better with robots because they “can be programmed to leave a digital trail, which potentially allows better scrutiny of their actions than is often the case with soldiers and could therefore in that sense enhance accountability.”
Ronald Arkin, a roboticist at the Georgia Institute of Technology, has advised the Pentagon on the question of whether robots can be made to operate ethically. He believes that the laws of war and international humanitarian law can be programmed into military robots to guide their behavior on the battlefield.
“Atrocities are an ever-present phenomenon in the battlefield,” Arkin said. “As such, looking at human-level performance, it became clear to me that intelligent robots can potentially do better than human beings.”
Arkin says machines would not “replace soldier morality.” But he argues that in narrow, constrained circumstances like countersniper operations or clearing buildings, a robot may actually do a better job of distinguishing between enemies and non-combatants than a human being.
“That shouldn’t be that hard to imagine,” he said, “because we already have robots that are stronger than people, robots that are faster than people, we have others that are smarter than people. So to me, it is not that far a leap that we can potentially create intelligent robotic systems that can outperform us with respect to ethical behavior in the battlefield. We treat each other very poorly in the battlespace, and so why couldn’t we create intelligent robotic systems that do better?”
Missy Cummings, a professor in the robotics department at MIT’s Computer Science and Artificial Intelligence Laboratory, finds Arkin’s thesis alarming.
“We totally disagree on the point. [Arkin] believes you can make more ethical robots than humans,” Cummings said. “Both as a former military pilot and as an academic, I simply disagree with this because I do not think that you can ever get, first of all, the sensors developed to truly be able to distinguish a foe from a friend, because there are so many nuances in that way, and then ultimately, if a robot pulls the trigger and you shoot a child, for example, accidentally, who’s going to be responsible for that?
“The military isn’t perfect, but they do have a clear responsibility chain,” she said. “I’m concerned that automated systems provide us a moral buffer, between us and our actions, so we don’t really have to think about it if a robot is doing it.”
— Gabriel Miller

In October, the Defense Advanced Research Projects Agency premiered its most ambitious humanoid robot contest ever: a competition to build robots that could replace humans in disaster zones like the site of the 2011 multiple-reactor meltdown at Japan’s Fukushima nuclear power plant.

Entrants in the challenge must design robots to perform feats of strength and mobility, such as removing debris and climbing ladders, as well as chores normally thought of as involving intellect: autonomous decision-making tasks like choosing which leaking valve to close.

DARPA’s competition lies at the intersection of robotics, machine learning and artificial intelligence.

“From a technology standpoint, I look at it as being unprecedented,” said Tony Stentz, director of the National Robotics Engineering Center at Carnegie Mellon University’s Robotics Institute.

Yet DARPA’s ambitious test underscores a key conundrum in artificial intelligence. Instead of proving how advanced AI is, it may prove how far short it has fallen.

On the one hand, automation of human analytical tasks, particularly in defense and intelligence, is undergoing a revolution. Software algorithms whiz through video and read through transcripts of phone intercepts. Completely autonomous unmanned aircraft like the X-47B are now in testing.

But the discipline is fragmented. Even as individual groups produce extraordinary technological feats in select areas, these advances are rarely unified into cohesive wholes. For example, while a computer can fly an unmanned aerial vehicle, experts say there is no chance in the foreseeable future that the same computer will be able to analyze and process the data the UAV is gathering.

Which is to say: Artificial intelligence can’t actually do the intelligence part of the job.

OVERHYPED

It’s a far cry from the early promises of AI.

Fifty years ago, pioneers such as Herbert Simon at Carnegie Mellon predicted that “machines will be capable, within 20 years, of doing any work a man can do.”

To a public weaned on the Terminator’s artificially intelligent Skynet network or Arthur C. Clarke’s HAL 9000, and measured against the expectations of the field’s founders, artificial intelligence appears woefully behind.

“The whole phrase ‘artificial intelligence,’ it’s really become — for people inside the business, we try not to use it, because it has such negative connotations today, as opposed to 10 years ago. And the reason is that AI has not lived up to its hype,” said Missy Cummings, a professor in the robotics department at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory. “We are nowhere [near] where we promised that we would be 10 years ago, and so it’s been overhyped.”

Even defining “artificial intelligence” has been a problem: It’s more a concept than a term with a specific meaning.

“Defining AI has been a challenge for the AI community for a while. It’s almost a joke in the field,” said Mike van Lent, the CEO and chief scientist of SoarTech, a company that uses artificial intelligence for simulation-based and immersive training applications. “For me, what I think of when I think about AI — it’s a system that has knowledge, the system needs to know things, and be using that knowledge to process incoming information, make sense of it and then send out some sort of conclusion from that application of knowledge to incoming sensory information.

“Within the world of AI,” van Lent said, “that’s an ‘agent-oriented view’ of AI.”
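
Van Lent’s “agent-oriented view” maps onto a simple loop: hold knowledge, apply it to incoming sensory information, and emit a conclusion. The sketch below is a minimal, illustrative version of that loop; the rules and sensor reports are invented for this example and are not drawn from SoarTech’s systems.

```python
# A minimal sketch of an "agent-oriented" system: stored knowledge is applied
# to each incoming observation to produce a conclusion. All rules and sensor
# reports here are hypothetical, for illustration only.

def agent_step(knowledge, observation):
    """Apply stored knowledge to one piece of incoming sensory information."""
    for condition, conclude in knowledge:
        if condition(observation):
            return conclude(observation)
    return "no conclusion"

# Hypothetical knowledge base: (condition, conclusion) rule pairs.
knowledge = [
    (lambda obs: obs["speed_kts"] > 400,
     lambda obs: f"classify track {obs['id']} as fast-mover"),
    (lambda obs: obs["speed_kts"] < 5,
     lambda obs: f"classify track {obs['id']} as stationary"),
]

# Simulated sensor reports flowing into the agent.
for report in [{"id": "T1", "speed_kts": 480}, {"id": "T2", "speed_kts": 2}]:
    print(agent_step(knowledge, report))
```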

Rather than producing a single device instantly recognizable as AI to the lay community, the field is balkanized. One side effect of this fragmentation is that modest AI advances permeate daily life without much notice. Siri, the “intelligent software assistant” that comes standard on Apple’s newest iPhones, began in the Pentagon, under DARPA’s CALO program to develop a cognitive assistant. Commercial navigation systems based on GPS are another example.

“Today, even commercial aircraft have a significant amount of automation and the human is really just there as a babysitter, and that is why UAVs have become so popular, because there is truly a revolution that has gone on in the aviation community,” Cummings said.

SENSE AND AVOID?

Aviation, in some senses, represents an ideal platform for artificial intelligence’s essential function: to replace complicated human tasks with automation. Cummings, who was a Navy fighter pilot before entering academia, says that “pretty much all of aviation, in terms of the pilot, can and should be handed over to the machine.”

She points out that Chesley Sullenberger, the celebrated US Airways pilot who made an emergency landing in the Hudson River in New York City in 2009, lost several minutes searching the cockpit for the plane’s emergency procedure manual, something an automated system could locate and retrieve in seconds.

Aviation lends itself to AI because airspace is relatively predictable; maneuvering on the ground requires adapting to a much more complex environment. Nevertheless, DARPA’s Robotics Challenge builds on the agency’s previous, successful challenges that catalyzed the development of driverless cars. For Afghanistan, the agency developed a vehicle-based system that could detect, locate, and potentially engage enemy shooters firing with anything from bullets and rocket-propelled grenades to anti-tank guided missiles.

ANALYZING VIDEO

Some of the most promising uses of AI involve big data analysis.

One is “the use of machine learning to learn to recognize recurring patterns, unusual deviations or deeply buried patterns of behavior,” said Gary Edwards, who directs the Informatics Lab at Lockheed Martin’s Advanced Technology Laboratories. “We are on the edge of a revolution in how we make use of massive data.”
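
At its simplest, spotting the “unusual deviations” Edwards describes comes down to learning a baseline from past observations and flagging readings that fall far outside it. The sketch below uses a plain z-score test on invented sensor values; it is a stand-in for the far richer models fielded systems would use.

```python
import statistics

# Toy anomaly detection: learn a baseline from historical readings, then flag
# values that deviate by more than a set number of standard deviations.
# The readings and threshold are invented for illustration.

baseline = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 11.7]  # historical sensor values
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(value, threshold=3.0):
    """Return True if `value` lies more than `threshold` standard deviations from the mean."""
    return abs(value - mean) / stdev > threshold

for reading in [12.1, 12.3, 17.9]:
    print(reading, "anomalous" if is_anomalous(reading) else "normal")
```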

It’s no secret that the vast output from the growing legions of ISR platforms is far too much for analysts to routinely sift through.

“Perception, with emphasis on full-motion video and exploitation of multiple sources of sensor data, is a very active area” of development in AI, Edwards said. “While analysts are inundated with visual data, understanding what’s in the data is still a time-consuming, human-intensive activity.”

In February, DARPA gathered representatives from the White House, the FBI and U.S. academic institutions to discuss XDATA, a program “launched to create tools to assimilate and process mountains of data that come in disparate types and sizes, and then provide visualization tools to allow users to analyze trends and glean value from the data.”

DARPA also is developing a number of technologies that combine imagery with machine learning. The agency’s Mind’s Eye program is working to replace scouts on the ground with a smart camera that not only describes everything it sees, but “reason[s] about what it cannot see.” Language programs hope to do everything from reading natural text to “allowing English speakers to understand foreign-language sources of all genres.”

Is this truly artificial intelligence? Many of the military’s most advanced projects fall into the category of “optimizers,” technology that reliably does a very narrow task, like reading, better or faster than a human, but is not capable of the kind of learning and reasoning often associated with intelligence as a concept.

“The stuff that’s fielded tends to be extraordinarily reliable, it has to be ... but the fact that it’s reliable means that it’s probably not all that sophisticated,” said Paul Bello, a program officer in Human and Bioengineered Systems at the Office of Naval Research. “It’ll help, but are the systems really smart in the sense that you and I think about smart?

“What is depressing to me about AI is that we’ve given up on the generalist program,” Bello said. “It used to be that people a long time ago ... [were] really working on building integrated architectures for the mind. But if you look at the kind of stuff that goes on in AI today, and if you look at the most popular, [what] you see the most of, it’s really very specialized and most of the work revolves around optimizing things, building an optimal system.”

As scientists attempt to design more human-like architectures of artificial intelligence, they realize just how little we understand about our own human reasoning and decision-making processes.

“This is the kind of fallibility of this whole problem with machine learning and supervised learning — that we ourselves as humans are trying to tell the computer that this is the rule set,” Cummings said. “And we ourselves don’t exactly know the rules that we make decisions by.”
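
Cummings’ point is that in supervised learning the machine’s “rules” are only as good as the examples and labels humans supply. The toy classifier below makes that dependence explicit: it simply copies the label of the closest human-labeled example, so any inconsistency or bias in the labels carries straight through. The features and the threat/benign labels are invented for illustration.

```python
# Toy supervised learning: classify a new observation by copying the label of
# the nearest human-labeled example. The feature vectors and labels are
# hypothetical; a biased or inconsistent label set would be inherited wholesale.

def nearest_neighbor(labeled_examples, query):
    """Return the label of the human-labeled example closest to `query`."""
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(labeled_examples, key=lambda ex: sq_distance(ex[0], query))
    return label

# Human analysts supply the (feature vector, label) examples.
labeled_examples = [
    ((0.9, 0.1), "threat"),
    ((0.8, 0.2), "threat"),
    ((0.1, 0.9), "benign"),
    ((0.2, 0.7), "benign"),
]

print(nearest_neighbor(labeled_examples, (0.85, 0.15)))  # -> threat
print(nearest_neighbor(labeled_examples, (0.15, 0.80)))  # -> benign
```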

None of that means that computer “brains” can’t be made to work, but they may need to be imagined in another way.

“The most important point,” she said, “is that we are humans and by definition we are fallible, and when we try to teach computers to learn, or basically teaching them our own view of looking at the world, that may not be the right way.”
