
How the US Government Views the Bright, Dark Sides of AI

White House, DOD, DHS Leaders Reveal How Their Agencies Use Artificial Intelligence
From left, DHS CIO Eric Hysen, White House Director Arati Prabhakar and Defense Department Chief Digital and AI Officer Craig Martell

The U.S. government is testing how artificial intelligence might enhance operations while preparing for the technology's downsides, such as more dangerous hacking attempts from nation-state adversaries, a congressional panel heard Thursday.


"The cybersecurity element is a great example of the bright and the dark side of AI technology," said Arati Prabhakar, director of the White House's office of science and technology policy. "The choices that all of our work focuses on is, 'How do we mitigate those risks and secure our cybersecurity systems?'"

Prabhakar testified Thursday alongside Homeland Security CIO Eric Hysen and Defense Department Chief Digital and AI Officer Craig Martell at a House of Representatives hearing focused on how federal agencies are harnessing artificial intelligence.

Hysen told lawmakers that Secretary of Homeland Security Alejandro Mayorkas has set up an artificial intelligence task force and tasked it with examining how AI can be used to secure critical infrastructure. The Department of Homeland Security plans to partner with critical infrastructure organizations to safeguard their uses of artificial intelligence and strengthen their cybersecurity practices writ large to defend against emerging threats.

In written testimony, Hysen said that the Cybersecurity and Infrastructure Security Agency, a component of the department, is working to leverage AI to better detect and mitigate software vulnerabilities in defense of federal networks (see: Experts Probe AI Risks Around Malicious Use, China Influence).

Martell said he sees "tremendous potential" in using artificial intelligence to reduce the number of people working on administrative tasks and free up more military officers to focus on the core security mission. Although the Department of Defense uses artificial intelligence as decision support for officers, Martell said officers are still responsible for making decisions.

The Government's Role in Making AI Systems Safe, Effective

Prabhakar said the National Institute of Standards and Technology's AI risk management framework has been an important step in a longer journey toward creating safe and effective AI for both the public and private sectors. The framework has pointed government and industry officials in the right direction when it comes to figuring out what questions to ask about how to make an AI system safe and effective, she said.

The Office of Science and Technology Policy published its Blueprint for an AI Bill of Rights in October, focused on the most essential values for an artificial intelligence system - notably, that it doesn't discriminate against people and that the system itself is safe and secure. From there, Prabhakar said, organizations must grapple with the processes that need to be put in place to manage the risk associated with AI systems.

Going forward, Prabhakar said, NIST should work with the technology community to develop tools and methods that help officials determine whether an AI system is safe and effective, in a manner similar to the processes used to assess the safety of physical products. But that work, she said, remains to be done.

"What we need is a future where AI systems are safe and effective, that they do what you need them to do and don't do dangerous things or inappropriate things that we don't want them to do," Prabhakar said. "But I think we should all be very clear that companies, researchers, nobody actually really quite knows how to do that."


About the Author

Michael Novinson


Managing Editor, Business, ISMG

Novinson is responsible for covering the vendor and technology landscape. Prior to joining ISMG, he spent four and a half years covering all the major cybersecurity vendors at CRN, with a focus on their programs and offerings for IT service providers. He was recognized for his breaking news coverage of the August 2019 coordinated ransomware attack against local governments in Texas as well as for his continued reporting around the SolarWinds hack in late 2020 and early 2021.
