Security Snippets: DHS issues AI security and safety guidelines for critical infrastructure

DHS advises safeguards to protect AI systems and to defend critical infrastructure from AI-powered attacks.

Continuing its work under the Biden Administration’s Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the Department of Homeland Security (DHS) has produced AI security and safety guidelines to mitigate cross-sector AI risks affecting the security of U.S. critical infrastructure.

The guidance builds on the AI security report published by the Five Eyes intelligence agencies and contributes to DHS’s broader efforts to protect the nation’s critical infrastructure.

The guidelines are organized around three categories of system-level risks:

  • Attacks Using AI: Threat actors leverage AI to enhance and scale attacks on critical infrastructure, including automated physical attacks deployed via autonomous systems, AI-enabled cyber compromises of supply chains, and the use of AI for autonomous malware and for other operations such as social engineering and theft of intellectual property.
  • Attacks Targeting AI Systems: Threat actors target the AI systems that support critical infrastructure. These activities include adversarial manipulation of AI algorithms or data, evasion attacks, interruption-of-service attacks, and model inversion and extraction.
  • Failures in AI Design and Implementation: Oversights in the planning, design, deployment, or operation of an AI tool or system can cause unintended effects that may disrupt critical infrastructure operations. These include supply chain vulnerabilities, inconsistent system maintenance, over- or under-reliance on AI, system brittleness, and statistical bias.

To help critical infrastructure owners and operators mitigate the AI risks above, DHS suggests several strategies aligned with the four functions of the NIST AI Risk Management Framework: Govern, Map, Measure, and Manage. These include:

  • Govern: Establishing an organizational culture of AI risk management through the prioritization of safety and security outcomes and radical transparency.
  • Map: Understanding the individual AI use context and risk profile within which AI risks can be evaluated.
  • Measure: Developing systems to assess, analyze, and track AI risks through the use of repeatable methods and metrics.
  • Manage: Prioritizing and acting upon AI risks to safety and security by implementing and maintaining identified risk management controls.

Contacts

Nathan Salminen, Partner, Washington, D.C.
Pat Bruny, Associate, Washington, D.C.