WASHINGTON — The Defense Innovation Unit has published new “Responsible AI Guidelines” laying out how it plans to apply the Pentagon’s recently adopted ethical principles for artificial intelligence in its commercial prototyping and acquisition efforts.

“DIU’s RAI Guidelines provide a step-by-step framework for AI [artificial intelligence] companies, DoD [Department of Defense] stakeholders, and program managers that can help to ensure that AI programs align with the DoD’s Ethical Principles for AI and that fairness, accountability and transparency are considered at each step in the development cycle of an AI system,” Jared Dunnmon, technical director of the AI and machine learning portfolio at DIU, said in a statement.

Established in 2015, DIU is charged with helping DoD organizations adopt commercial innovation. As such, the unit has been working since March 2020 to integrate the Pentagon’s Ethical Principles for Artificial Intelligence into its ongoing AI efforts. Over the course of 15 months, it consulted with experts across industry, government and academia, including researchers at Carnegie Mellon University’s Software Engineering Institute.

The resulting guidelines will help the unit operationalize the five ethical AI principles the Pentagon adopted in 2020 on the recommendation of the Defense Innovation Board, an advisory panel to the department.

According to DIU’s statement, the guidelines have already had the following effects:

  • Accelerated programs by clarifying end goals and roles, aligning expectations, and acknowledging risks and trade-offs from the outset.
  • Increased confidence that AI systems are developed, tested and vetted with the highest standards of fairness, accountability and transparency in mind.
  • Supported changes in the way AI technologies are evaluated, selected, prototyped and adopted, and helped avoid potential bad outcomes.
  • Provoked and surfaced questions that have spurred conversations crucial for the success of AI projects.

“Users want to know that they can trust and verify that their tools protect American interests without compromising our collective values,” said John Stockton, co-founder of Quantifind, one of the companies providing feedback on the guidelines.

“These guidelines show promise for actually accelerating technology adoption, as it helps identify and get ahead of potentially show-stopping issues,” he said in a statement. “We’ve found that leaning into this effort has also served us well outside of government, by strengthening internal controls and producing transparency and patterns of trust that can also be leveraged with all users, both public and private.”

Nathan Strout covers space, unmanned and intelligence systems for C4ISRNET.
