
US DHS Announces New AI Guardrails

CIO Eric Hysen to Take on Additional Role as Agency's First Chief AI Officer

The U.S. Department of Homeland Security said it will eschew biased artificial intelligence decision-making and facial recognition systems as part of an ongoing federal effort to promote "trustworthy AI."


The department on Thursday published two edicts. The first is a policy statement prohibiting personnel from using AI to profile or discriminate against individuals based on protected characteristics such as ethnicity, religion or national origin. The policy also requires DHS to evaluate AI systems for discriminatory effects.

The second is a directive mandating the testing of facial recognition systems for bias or disparate impacts. The directive states that U.S. citizens can opt out of facial recognition - except when it is needed by law enforcement - and prohibits facial recognition from being used as the sole basis for an arrest.

U.S. Secretary of Homeland Security Alejandro Mayorkas also announced that departmental CIO Eric Hysen will serve as the "chief AI officer" while staying on in his original position.

An executive order signed by then-President Donald Trump during his final weeks in office established principles for the use of AI in government including that it be "accurate, reliable and effective." The Biden administration has since emphasized the role of human rights in AI. In October 2022, it published a blueprint for an AI bill of rights, and it has since obtained voluntary commitments from more than a dozen tech giants to build public confidence in AI and guard against national security threats (see: IBM, Nvidia, Others Commit to Develop 'Trustworthy' AI).

Fighting biased outcomes can be harder than it appears, since AI systems can discriminate even without obviously biased inputs. Blocking close proxies for characteristics such as ethnicity - for instance, queries based on ZIP code and income - does not close the loophole: the same intentionally biased results can be reached through other factors. AI is effective precisely because it draws on vast pools of data, allowing computers to make connections between seemingly unrelated data points. With enough data, there's no need for a close proxy such as ZIP code.
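The proxy effect described above can be illustrated with a small synthetic sketch. Everything here is hypothetical - the feature names, correlations and threshold are invented for illustration - but it shows how a scoring rule that never sees a protected attribute can still reconstruct it from innocuous-looking correlated features:

```python
import random

random.seed(0)

# Synthetic population: the protected attribute ("group") is never
# shown to the scoring rule, but two innocuous features correlate
# with it. All correlations here are invented for illustration.
def make_person():
    group = random.random() < 0.5
    commute_km = random.gauss(25 if group else 10, 5)
    grocery_ratio = random.gauss(0.7 if group else 0.4, 0.1)
    return group, commute_km, grocery_ratio

def proxy_score(commute_km, grocery_ratio):
    # A rule that never mentions the protected attribute, yet
    # effectively infers it from the correlated features.
    return commute_km / 25 + grocery_ratio / 0.7

people = [make_person() for _ in range(10_000)]

# Guess group membership purely from the proxy score.
correct = sum(
    (proxy_score(c, g) > 1.4) == group for group, c, g in people
)
accuracy = correct / len(people)
print(f"proxy accuracy: {accuracy:.0%}")  # well above the 50% chance level
```

The point of the sketch is that removing the "obvious" proxy (here, the protected attribute itself) does nothing: with enough correlated features, the scoring rule recovers it anyway, which is why the DHS policy targets discriminatory *effects* rather than specific inputs.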

Critics of facial recognition have also questioned whether human review will be sufficient to prevent wrongful arrests based on facial recognition matches. The New York Times reported in August that six people have said they were falsely accused of a crime after a facial recognition search matched an unknown offender's photo to their image in a database. "You've got a very powerful tool that, if it searches enough faces, will always yield people who look like the person on the surveillance image," a psychology professor told the Times.

The two DHS edicts are the result of an artificial intelligence task force that Mayorkas formed in April. "Artificial intelligence is a powerful tool we must harness effectively and responsibly," Mayorkas said.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.




