AI ethics: Pentagon on the verge of launching public web application to guide ‘responsible’ development


John Beiser, senior counsel, US Senate Committee on Commerce, Science, and Transportation; Matthew Johnson of the CDAO’s Responsible AI team; and Navrina Singh, CEO of Credo AI (Sydney Freedberg / Breaking Defense)

WASHINGTON: How do you keep the military from accidentally building Skynet from the Terminator movies? It turns out there is a plan for that, and the Pentagon wants people to use it.

Three years ago, the Department of Defense adopted five broad principles for the ethical use of AI in military missions, from eliminating racial bias in the training data for algorithms to building in kill switches in case an AI malfunctions. Last year, the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO) turned those principles into a detailed responsible-AI implementation plan, promising to create a toolkit to help acquisition officials apply the general guidelines to their own programs. And last night, a senior advisor on CDAO’s Responsible AI team said the toolkit will be released very soon and will be available online to anyone interested in, or worried about, the DoD’s work.

“It’s a web application that you can access, and it’s accessible to the public,” Matthew Johnson told reporters at a roundtable hosted by consulting giant Booz Allen Hamilton. “It should be publicly available and usable, so our industry partners know exactly what our expectations are [and] so the public knows exactly how the Department of Defense thinks.”

The toolkit will be released soon, Johnson told reporters: “I can’t say when, but it’s very, very soon.” And even non-US users are welcome, he continued: “A key part of our defense strategy is integration and interoperability with partners [abroad]. It is very important for projects like JADC2.”

RELATED: In further tests of global information dominance, CDAO seeks to accelerate allies’ information sharing.

Johnson said at a public forum earlier in the evening that promoting understanding, transparency and cooperation among US officials, defense contractors and foreign allies is just one part of the Pentagon’s ambitious agenda. The larger strategic aim, he said, is to use the US military’s purchasing power to push emerging technology toward American ideals of openness and privacy, and away from the Chinese Communist Party’s authoritarian view of artificial intelligence as a tool of control and propaganda.

“It’s really about trying to shape this overall ecosystem, because, not surprisingly, there are others who are trying to shape this ecosystem, very much like Belt and Road,” he said, referring to Xi Jinping’s much-touted Belt and Road Initiative for rebuilding global trade around China. “Responsible AI sounds like this soft, squishy, amorphous thing, but actually, I think it’s a huge source of soft power if we can develop technology that extends US values.”

“One of the things our team thinks about a lot is, how do we encourage responsible AI?” Johnson said. “We think more in terms of carrots than sticks, [and] one of the big carrots we have with the Department of Defense and its $900 billion annual budget is funding.”

“[So] how can we set these very clear requirements and criteria to demonstrate that your technology aligns with the DoD’s AI ethical principles and our values, rather than vague hand-waving?” he went on.

One step was last year’s responsible AI implementation plan, which turned the five broad principles adopted in 2020 into 64 detailed lines of effort, from training the workforce to publishing “model cards” that explain how each AI model works. But such formal plans are still static documents that program managers and their staffs may struggle to apply to their own unique situations.

So CDAO has developed an online tool that walks the user through a self-assessment and helps them figure out how to implement the five principles and 64 lines of effort on a specific program. The software is meant to cover every stage of a program’s life, from initial brainstorming through development and fielding to the eventual retirement of a technology, and to tailor its guidance to the specific user’s responsibilities on that program.

For any given user, Johnson told reporters, the software is designed to answer a series of questions: “What are the responsible AI activities I need to do? How do I do those activities? What tools should I use? How do I identify risks? How do I identify opportunities? [How do I] do them all and document them?”

Johnson acknowledged that it will take time to work out all the bugs, and no one will be required to use the tool, at least in its initial form. “At this stage, it is just a voluntary tool or a voluntary resource, so it is not mandatory in any way,” he emphasized to reporters. “What we’re releasing is a starting point, version 1, and we’re validating it on a number of DoD use cases and updating it continuously.”
